00:00:00.001 Started by upstream project "autotest-per-patch" build number 120466 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.092 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu20-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.093 The recommended git tool is: git 00:00:00.093 using credential 00000000-0000-0000-0000-000000000002 00:00:00.097 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu20-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.136 Fetching changes from the remote Git repository 00:00:00.137 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.168 Using shallow fetch with depth 1 00:00:00.168 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.168 > git --version # timeout=10 00:00:00.190 > git --version # 'git version 2.39.2' 00:00:00.190 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.191 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.191 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.865 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.876 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.886 Checking out Revision 690db462df1e37a2e89488b574f5565194c04745 (FETCH_HEAD) 00:00:05.886 > git config core.sparsecheckout # timeout=10 00:00:05.895 > git read-tree -mu HEAD # timeout=10 00:00:05.908 > git checkout -f 690db462df1e37a2e89488b574f5565194c04745 # timeout=5 00:00:05.925 Commit message: "jenkins/jjb-config: Provide PACKER_GITHUB_API_TOKEN to packer" 00:00:05.925 > git rev-list --no-walk 690db462df1e37a2e89488b574f5565194c04745 # timeout=10 00:00:05.995 [Pipeline] Start of Pipeline 00:00:06.008 [Pipeline] library 00:00:06.010 Loading library shm_lib@master 00:00:06.010 Library shm_lib@master is cached. Copying from home. 00:00:06.027 [Pipeline] node 00:00:21.031 Still waiting to schedule task 00:00:21.032 Waiting for next available executor on ‘vagrant-vm-host’ 00:02:38.088 Running on VM-host-SM17 in /var/jenkins/workspace/ubuntu20-vg-autotest_3 00:02:38.090 [Pipeline] { 00:02:38.102 [Pipeline] catchError 00:02:38.103 [Pipeline] { 00:02:38.118 [Pipeline] wrap 00:02:38.127 [Pipeline] { 00:02:38.135 [Pipeline] stage 00:02:38.137 [Pipeline] { (Prologue) 00:02:38.158 [Pipeline] echo 00:02:38.160 Node: VM-host-SM17 00:02:38.165 [Pipeline] cleanWs 00:02:38.188 [WS-CLEANUP] Deleting project workspace... 00:02:38.188 [WS-CLEANUP] Deferred wipeout is used... 
00:02:38.194 [WS-CLEANUP] done 00:02:38.333 [Pipeline] setCustomBuildProperty 00:02:38.400 [Pipeline] nodesByLabel 00:02:38.402 Found a total of 1 nodes with the 'sorcerer' label 00:02:38.413 [Pipeline] httpRequest 00:02:38.418 HttpMethod: GET 00:02:38.418 URL: http://10.211.164.101/packages/jbp_690db462df1e37a2e89488b574f5565194c04745.tar.gz 00:02:38.420 Sending request to url: http://10.211.164.101/packages/jbp_690db462df1e37a2e89488b574f5565194c04745.tar.gz 00:02:38.423 Response Code: HTTP/1.1 200 OK 00:02:38.424 Success: Status code 200 is in the accepted range: 200,404 00:02:38.424 Saving response body to /var/jenkins/workspace/ubuntu20-vg-autotest_3/jbp_690db462df1e37a2e89488b574f5565194c04745.tar.gz 00:02:38.561 [Pipeline] sh 00:02:38.840 + tar --no-same-owner -xf jbp_690db462df1e37a2e89488b574f5565194c04745.tar.gz 00:02:38.858 [Pipeline] httpRequest 00:02:38.861 HttpMethod: GET 00:02:38.862 URL: http://10.211.164.101/packages/spdk_2b97e37d606dcd2dcafe4b0ed286ce4c2c9bac20.tar.gz 00:02:38.863 Sending request to url: http://10.211.164.101/packages/spdk_2b97e37d606dcd2dcafe4b0ed286ce4c2c9bac20.tar.gz 00:02:38.863 Response Code: HTTP/1.1 200 OK 00:02:38.864 Success: Status code 200 is in the accepted range: 200,404 00:02:38.864 Saving response body to /var/jenkins/workspace/ubuntu20-vg-autotest_3/spdk_2b97e37d606dcd2dcafe4b0ed286ce4c2c9bac20.tar.gz 00:02:41.102 [Pipeline] sh 00:02:41.380 + tar --no-same-owner -xf spdk_2b97e37d606dcd2dcafe4b0ed286ce4c2c9bac20.tar.gz 00:02:44.676 [Pipeline] sh 00:02:44.954 + git -C spdk log --oneline -n5 00:02:44.954 2b97e37d6 test/accel: DIF strip accel functional tests 00:02:44.954 da1f487af examples/accel: DIF strip accel perf tests 00:02:44.954 2c7a292ec lib/accel: DIF strip accel SW implementation 00:02:44.954 687da749b lib/env_dpdk: put env_context last on DPDK command line 00:02:44.954 90b54d766 test/app/stub: add command line option to set default io_queue_size 00:02:44.972 [Pipeline] writeFile 00:02:44.989 [Pipeline] sh 00:02:45.268 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:45.278 [Pipeline] sh 00:02:45.553 + cat autorun-spdk.conf 00:02:45.553 SPDK_TEST_UNITTEST=1 00:02:45.553 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:45.553 SPDK_TEST_NVME=1 00:02:45.553 SPDK_TEST_BLOCKDEV=1 00:02:45.553 SPDK_RUN_ASAN=1 00:02:45.553 SPDK_RUN_UBSAN=1 00:02:45.553 SPDK_TEST_RAID5=1 00:02:45.553 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:45.560 RUN_NIGHTLY=0 00:02:45.562 [Pipeline] } 00:02:45.577 [Pipeline] // stage 00:02:45.589 [Pipeline] stage 00:02:45.591 [Pipeline] { (Run VM) 00:02:45.604 [Pipeline] sh 00:02:45.884 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:45.884 + echo 'Start stage prepare_nvme.sh' 00:02:45.884 Start stage prepare_nvme.sh 00:02:45.884 + [[ -n 6 ]] 00:02:45.884 + disk_prefix=ex6 00:02:45.884 + [[ -n /var/jenkins/workspace/ubuntu20-vg-autotest_3 ]] 00:02:45.884 + [[ -e /var/jenkins/workspace/ubuntu20-vg-autotest_3/autorun-spdk.conf ]] 00:02:45.884 + source /var/jenkins/workspace/ubuntu20-vg-autotest_3/autorun-spdk.conf 00:02:45.884 ++ SPDK_TEST_UNITTEST=1 00:02:45.884 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:45.884 ++ SPDK_TEST_NVME=1 00:02:45.884 ++ SPDK_TEST_BLOCKDEV=1 00:02:45.884 ++ SPDK_RUN_ASAN=1 00:02:45.884 ++ SPDK_RUN_UBSAN=1 00:02:45.884 ++ SPDK_TEST_RAID5=1 00:02:45.884 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:45.884 ++ RUN_NIGHTLY=0 00:02:45.884 + cd /var/jenkins/workspace/ubuntu20-vg-autotest_3 00:02:45.884 + nvme_files=() 00:02:45.884 + declare -A nvme_files 00:02:45.884 + 
backend_dir=/var/lib/libvirt/images/backends 00:02:45.884 + nvme_files['nvme.img']=5G 00:02:45.884 + nvme_files['nvme-cmb.img']=5G 00:02:45.884 + nvme_files['nvme-multi0.img']=4G 00:02:45.884 + nvme_files['nvme-multi1.img']=4G 00:02:45.884 + nvme_files['nvme-multi2.img']=4G 00:02:45.884 + nvme_files['nvme-openstack.img']=8G 00:02:45.884 + nvme_files['nvme-zns.img']=5G 00:02:45.884 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:45.884 + (( SPDK_TEST_FTL == 1 )) 00:02:45.884 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:45.884 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:02:45.884 + for nvme in "${!nvme_files[@]}" 00:02:45.884 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:02:45.884 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:45.884 + for nvme in "${!nvme_files[@]}" 00:02:45.884 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:02:45.884 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:45.884 + for nvme in "${!nvme_files[@]}" 00:02:45.884 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:02:45.884 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:45.884 + for nvme in "${!nvme_files[@]}" 00:02:45.884 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:02:45.884 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:45.884 + for nvme in "${!nvme_files[@]}" 00:02:45.884 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:02:45.884 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:45.884 + for nvme in "${!nvme_files[@]}" 00:02:45.884 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:02:45.884 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:45.884 + for nvme in "${!nvme_files[@]}" 00:02:45.884 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:02:46.145 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:46.145 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:02:46.145 + echo 'End stage prepare_nvme.sh' 00:02:46.145 End stage prepare_nvme.sh 00:02:46.200 [Pipeline] sh 00:02:46.479 + DISTRO=ubuntu2004 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:46.479 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme.img -H -a -v -f ubuntu2004 00:02:46.479 00:02:46.479 DIR=/var/jenkins/workspace/ubuntu20-vg-autotest_3/spdk/scripts/vagrant 00:02:46.479 SPDK_DIR=/var/jenkins/workspace/ubuntu20-vg-autotest_3/spdk 00:02:46.479 VAGRANT_TARGET=/var/jenkins/workspace/ubuntu20-vg-autotest_3 00:02:46.479 HELP=0 00:02:46.479 DRY_RUN=0 00:02:46.479 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme.img, 00:02:46.479 NVME_DISKS_TYPE=nvme, 00:02:46.479 
NVME_AUTO_CREATE=0 00:02:46.479 NVME_DISKS_NAMESPACES=, 00:02:46.479 NVME_CMB=, 00:02:46.479 NVME_PMR=, 00:02:46.479 NVME_ZNS=, 00:02:46.479 NVME_MS=, 00:02:46.479 NVME_FDP=, 00:02:46.479 SPDK_VAGRANT_DISTRO=ubuntu2004 00:02:46.479 SPDK_VAGRANT_VMCPU=10 00:02:46.479 SPDK_VAGRANT_VMRAM=12288 00:02:46.479 SPDK_VAGRANT_PROVIDER=libvirt 00:02:46.479 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:02:46.479 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:46.479 SPDK_OPENSTACK_NETWORK=0 00:02:46.479 VAGRANT_PACKAGE_BOX=0 00:02:46.479 VAGRANTFILE=/var/jenkins/workspace/ubuntu20-vg-autotest_3/spdk/scripts/vagrant/Vagrantfile 00:02:46.479 FORCE_DISTRO=true 00:02:46.479 VAGRANT_BOX_VERSION= 00:02:46.479 EXTRA_VAGRANTFILES= 00:02:46.479 NIC_MODEL=e1000 00:02:46.479 00:02:46.479 mkdir: created directory '/var/jenkins/workspace/ubuntu20-vg-autotest_3/ubuntu2004-libvirt' 00:02:46.479 /var/jenkins/workspace/ubuntu20-vg-autotest_3/ubuntu2004-libvirt /var/jenkins/workspace/ubuntu20-vg-autotest_3 00:02:49.763 Bringing machine 'default' up with 'libvirt' provider... 00:02:50.328 ==> default: Creating image (snapshot of base box volume). 00:02:50.328 ==> default: Creating domain with the following settings... 00:02:50.328 ==> default: -- Name: ubuntu2004-20.04-1712646987-2220_default_1713358013_ad4568ff98913f529862 00:02:50.328 ==> default: -- Domain type: kvm 00:02:50.328 ==> default: -- Cpus: 10 00:02:50.328 ==> default: -- Feature: acpi 00:02:50.328 ==> default: -- Feature: apic 00:02:50.328 ==> default: -- Feature: pae 00:02:50.328 ==> default: -- Memory: 12288M 00:02:50.328 ==> default: -- Memory Backing: hugepages: 00:02:50.329 ==> default: -- Management MAC: 00:02:50.329 ==> default: -- Loader: 00:02:50.329 ==> default: -- Nvram: 00:02:50.329 ==> default: -- Base box: spdk/ubuntu2004 00:02:50.329 ==> default: -- Storage pool: default 00:02:50.329 ==> default: -- Image: /var/lib/libvirt/images/ubuntu2004-20.04-1712646987-2220_default_1713358013_ad4568ff98913f529862.img (20G) 00:02:50.329 ==> default: -- Volume Cache: default 00:02:50.329 ==> default: -- Kernel: 00:02:50.329 ==> default: -- Initrd: 00:02:50.329 ==> default: -- Graphics Type: vnc 00:02:50.329 ==> default: -- Graphics Port: -1 00:02:50.329 ==> default: -- Graphics IP: 127.0.0.1 00:02:50.329 ==> default: -- Graphics Password: Not defined 00:02:50.329 ==> default: -- Video Type: cirrus 00:02:50.329 ==> default: -- Video VRAM: 9216 00:02:50.329 ==> default: -- Sound Type: 00:02:50.329 ==> default: -- Keymap: en-us 00:02:50.329 ==> default: -- TPM Path: 00:02:50.329 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:50.329 ==> default: -- Command line args: 00:02:50.329 ==> default: -> value=-device, 00:02:50.329 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:50.329 ==> default: -> value=-drive, 00:02:50.329 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0, 00:02:50.329 ==> default: -> value=-device, 00:02:50.329 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:50.585 ==> default: Creating shared folders metadata... 00:02:50.585 ==> default: Starting domain. 00:02:52.486 ==> default: Waiting for domain to get an IP address... 00:03:02.468 ==> default: Waiting for SSH to become available... 00:03:03.035 ==> default: Configuring and enabling network interfaces... 
00:03:05.596 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest_3/spdk/ => /home/vagrant/spdk_repo/spdk 00:03:10.859 ==> default: Mounting SSHFS shared folder... 00:03:11.424 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest_3/ubuntu2004-libvirt/output => /home/vagrant/spdk_repo/output 00:03:11.424 ==> default: Checking Mount.. 00:03:13.955 ==> default: Checking Mount.. 00:03:14.213 ==> default: Folder Successfully Mounted! 00:03:14.213 ==> default: Running provisioner: file... 00:03:14.471 default: ~/.gitconfig => .gitconfig 00:03:14.471 00:03:14.471 SUCCESS! 00:03:14.471 00:03:14.471 cd to /var/jenkins/workspace/ubuntu20-vg-autotest_3/ubuntu2004-libvirt and type "vagrant ssh" to use. 00:03:14.471 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:03:14.471 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu20-vg-autotest_3/ubuntu2004-libvirt" to destroy all trace of vm. 00:03:14.471 00:03:14.479 [Pipeline] } 00:03:14.495 [Pipeline] // stage 00:03:14.503 [Pipeline] dir 00:03:14.503 Running in /var/jenkins/workspace/ubuntu20-vg-autotest_3/ubuntu2004-libvirt 00:03:14.505 [Pipeline] { 00:03:14.517 [Pipeline] catchError 00:03:14.519 [Pipeline] { 00:03:14.531 [Pipeline] sh 00:03:14.809 + vagrant ssh-config --host vagrant 00:03:14.809 + sed -ne /^Host/,$p 00:03:14.809 + tee ssh_conf 00:03:18.993 Host vagrant 00:03:18.993 HostName 192.168.121.3 00:03:18.993 User vagrant 00:03:18.993 Port 22 00:03:18.993 UserKnownHostsFile /dev/null 00:03:18.993 StrictHostKeyChecking no 00:03:18.993 PasswordAuthentication no 00:03:18.993 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2004/20.04-1712646987-2220/libvirt/ubuntu2004 00:03:18.993 IdentitiesOnly yes 00:03:18.993 LogLevel FATAL 00:03:18.993 ForwardAgent yes 00:03:18.993 ForwardX11 yes 00:03:18.993 00:03:19.004 [Pipeline] withEnv 00:03:19.006 [Pipeline] { 00:03:19.020 [Pipeline] sh 00:03:19.297 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:03:19.297 source /etc/os-release 00:03:19.297 [[ -e /image.version ]] && img=$(< /image.version) 00:03:19.297 # Minimal, systemd-like check. 00:03:19.297 if [[ -e /.dockerenv ]]; then 00:03:19.297 # Clear garbage from the node's name: 00:03:19.297 # agt-er_autotest_547-896 -> autotest_547-896 00:03:19.297 # $HOSTNAME is the actual container id 00:03:19.297 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:03:19.297 if mountpoint -q /etc/hostname; then 00:03:19.297 # We can assume this is a mount from a host where container is running, 00:03:19.297 # so fetch its hostname to easily identify the target swarm worker. 
00:03:19.297 container="$(< /etc/hostname) ($agent)" 00:03:19.297 else 00:03:19.297 # Fallback 00:03:19.297 container=$agent 00:03:19.297 fi 00:03:19.297 fi 00:03:19.297 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:03:19.297 00:03:19.874 [Pipeline] } 00:03:19.889 [Pipeline] // withEnv 00:03:19.897 [Pipeline] setCustomBuildProperty 00:03:19.909 [Pipeline] stage 00:03:19.911 [Pipeline] { (Tests) 00:03:19.926 [Pipeline] sh 00:03:20.204 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu20-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:03:20.778 [Pipeline] timeout 00:03:20.779 Timeout set to expire in 1 hr 0 min 00:03:20.779 [Pipeline] { 00:03:20.792 [Pipeline] sh 00:03:21.067 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:03:22.002 HEAD is now at 2b97e37d6 test/accel: DIF strip accel functional tests 00:03:22.016 [Pipeline] sh 00:03:22.293 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:03:22.859 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:03:22.872 [Pipeline] sh 00:03:23.149 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu20-vg-autotest_3/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:03:23.747 [Pipeline] sh 00:03:24.044 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo 00:03:24.610 ++ readlink -f spdk_repo 00:03:24.610 + DIR_ROOT=/home/vagrant/spdk_repo 00:03:24.610 + [[ -n /home/vagrant/spdk_repo ]] 00:03:24.610 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:03:24.611 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:03:24.611 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:03:24.611 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:03:24.611 + [[ -d /home/vagrant/spdk_repo/output ]] 00:03:24.611 + cd /home/vagrant/spdk_repo 00:03:24.611 + source /etc/os-release 00:03:24.611 ++ NAME=Ubuntu 00:03:24.611 ++ VERSION='20.04.6 LTS (Focal Fossa)' 00:03:24.611 ++ ID=ubuntu 00:03:24.611 ++ ID_LIKE=debian 00:03:24.611 ++ PRETTY_NAME='Ubuntu 20.04.6 LTS' 00:03:24.611 ++ VERSION_ID=20.04 00:03:24.611 ++ HOME_URL=https://www.ubuntu.com/ 00:03:24.611 ++ SUPPORT_URL=https://help.ubuntu.com/ 00:03:24.611 ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 00:03:24.611 ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 00:03:24.611 ++ VERSION_CODENAME=focal 00:03:24.611 ++ UBUNTU_CODENAME=focal 00:03:24.611 + uname -a 00:03:24.611 Linux ubuntu2004-cloud-1712646987-2220 5.4.0-176-generic #196-Ubuntu SMP Fri Mar 22 16:46:39 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux 00:03:24.611 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:24.611 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:03:24.869 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:03:24.869 Hugepages 00:03:24.869 node hugesize free / total 00:03:24.869 node0 1048576kB 0 / 0 00:03:24.869 node0 2048kB 0 / 0 00:03:24.869 00:03:24.869 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:24.869 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:24.869 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:24.869 + rm -f /tmp/spdk-ld-path 00:03:24.869 + source autorun-spdk.conf 00:03:24.869 ++ SPDK_TEST_UNITTEST=1 00:03:24.869 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:24.869 ++ SPDK_TEST_NVME=1 00:03:24.869 ++ SPDK_TEST_BLOCKDEV=1 00:03:24.869 ++ SPDK_RUN_ASAN=1 00:03:24.869 ++ SPDK_RUN_UBSAN=1 00:03:24.869 ++ 
SPDK_TEST_RAID5=1 00:03:24.869 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:24.869 ++ RUN_NIGHTLY=0 00:03:24.869 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:24.869 + [[ -n '' ]] 00:03:24.869 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:03:24.869 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:03:24.869 + for M in /var/spdk/build-*-manifest.txt 00:03:24.869 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:24.869 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:25.128 + for M in /var/spdk/build-*-manifest.txt 00:03:25.128 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:25.128 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:25.128 ++ uname 00:03:25.128 + [[ Linux == \L\i\n\u\x ]] 00:03:25.128 + sudo dmesg -T 00:03:25.128 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:03:25.128 + sudo dmesg --clear 00:03:25.128 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:03:25.128 + dmesg_pid=2315 00:03:25.128 + [[ Ubuntu == FreeBSD ]] 00:03:25.128 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:25.128 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:25.128 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:25.128 + sudo dmesg -Tw 00:03:25.128 + [[ -x /usr/src/fio-static/fio ]] 00:03:25.128 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:25.128 + [[ ! -v VFIO_QEMU_BIN ]] 00:03:25.128 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:25.128 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:03:25.128 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:03:25.128 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:03:25.128 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:25.128 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:25.128 Test configuration: 00:03:25.128 SPDK_TEST_UNITTEST=1 00:03:25.128 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:25.128 SPDK_TEST_NVME=1 00:03:25.128 SPDK_TEST_BLOCKDEV=1 00:03:25.128 SPDK_RUN_ASAN=1 00:03:25.128 SPDK_RUN_UBSAN=1 00:03:25.128 SPDK_TEST_RAID5=1 00:03:25.128 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:25.128 RUN_NIGHTLY=0 12:47:28 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:25.128 12:47:28 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:25.128 12:47:28 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:25.128 12:47:28 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:25.128 12:47:28 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:25.128 12:47:28 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:25.128 12:47:28 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:25.128 12:47:28 -- paths/export.sh@5 -- $ export PATH 00:03:25.128 12:47:28 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:25.128 12:47:28 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:03:25.128 12:47:28 -- common/autobuild_common.sh@435 -- $ date +%s 00:03:25.128 12:47:28 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713358048.XXXXXX 00:03:25.128 12:47:28 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713358048.j0p4H3 00:03:25.128 12:47:28 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:03:25.128 12:47:29 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:03:25.128 12:47:29 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:03:25.128 12:47:29 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:03:25.128 12:47:29 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:03:25.128 12:47:29 -- common/autobuild_common.sh@451 -- $ get_config_params 00:03:25.128 12:47:29 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:03:25.128 12:47:29 -- common/autotest_common.sh@10 -- $ set +x 00:03:25.128 12:47:29 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f' 00:03:25.128 12:47:29 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:03:25.128 12:47:29 -- pm/common@17 -- $ local monitor 00:03:25.128 12:47:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.128 12:47:29 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2351 00:03:25.128 12:47:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.128 12:47:29 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2352 00:03:25.128 12:47:29 -- pm/common@26 -- $ sleep 1 00:03:25.128 12:47:29 -- pm/common@21 -- $ date +%s 00:03:25.128 12:47:29 -- pm/common@21 -- $ date +%s 00:03:25.128 12:47:29 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1713358049 00:03:25.129 12:47:29 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1713358049 00:03:25.129 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:03:25.129 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:03:25.129 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1713358049_collect-cpu-load.pm.log 00:03:25.129 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1713358049_collect-vmstat.pm.log 00:03:26.064 12:47:30 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:03:26.064 12:47:30 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:26.064 12:47:30 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:26.064 12:47:30 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:26.064 12:47:30 -- spdk/autobuild.sh@16 -- $ date -u 00:03:26.064 Wed Apr 17 12:47:30 UTC 2024 00:03:26.064 12:47:30 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:26.064 v24.05-pre-361-g2b97e37d6 00:03:26.064 12:47:30 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:03:26.064 12:47:30 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:03:26.064 12:47:30 -- common/autotest_common.sh@1075 -- $ '[' 3 -le 1 ']' 00:03:26.064 12:47:30 -- common/autotest_common.sh@1081 -- $ xtrace_disable 00:03:26.064 12:47:30 -- common/autotest_common.sh@10 -- $ set +x 00:03:26.322 ************************************ 00:03:26.322 START TEST asan 00:03:26.322 ************************************ 00:03:26.322 using asan 00:03:26.322 12:47:30 -- common/autotest_common.sh@1099 -- $ echo 'using asan' 00:03:26.322 00:03:26.322 real 0m0.000s 00:03:26.322 user 0m0.000s 00:03:26.322 sys 0m0.000s 00:03:26.322 12:47:30 -- common/autotest_common.sh@1100 -- $ xtrace_disable 00:03:26.322 12:47:30 -- common/autotest_common.sh@10 -- $ set +x 00:03:26.322 ************************************ 00:03:26.322 END TEST asan 00:03:26.322 ************************************ 00:03:26.322 12:47:30 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:26.322 12:47:30 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:26.322 12:47:30 -- common/autotest_common.sh@1075 -- $ '[' 3 -le 1 ']' 00:03:26.322 12:47:30 -- common/autotest_common.sh@1081 -- $ xtrace_disable 00:03:26.322 12:47:30 -- common/autotest_common.sh@10 -- $ set +x 00:03:26.322 ************************************ 00:03:26.322 START TEST ubsan 00:03:26.322 ************************************ 00:03:26.322 using ubsan 00:03:26.322 ************************************ 00:03:26.322 END TEST ubsan 00:03:26.322 ************************************ 00:03:26.322 12:47:30 -- common/autotest_common.sh@1099 -- $ echo 'using ubsan' 00:03:26.322 00:03:26.322 real 0m0.000s 00:03:26.322 user 0m0.000s 00:03:26.322 sys 0m0.000s 00:03:26.322 12:47:30 -- common/autotest_common.sh@1100 -- $ xtrace_disable 00:03:26.322 12:47:30 -- common/autotest_common.sh@10 -- $ set +x 00:03:26.322 12:47:30 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:03:26.322 12:47:30 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:26.322 12:47:30 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:26.322 12:47:30 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:26.322 12:47:30 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:26.322 12:47:30 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:03:26.322 12:47:30 -- spdk/autobuild.sh@58 -- $ unittest_build 00:03:26.322 12:47:30 -- common/autobuild_common.sh@411 -- $ run_test unittest_build _unittest_build 00:03:26.322 12:47:30 -- common/autotest_common.sh@1075 -- $ '[' 2 -le 1 ']' 00:03:26.322 12:47:30 -- common/autotest_common.sh@1081 -- $ xtrace_disable 00:03:26.322 12:47:30 -- common/autotest_common.sh@10 -- $ set +x 00:03:26.322 ************************************ 00:03:26.322 START TEST 
unittest_build 00:03:26.322 ************************************ 00:03:26.322 12:47:30 -- common/autotest_common.sh@1099 -- $ _unittest_build 00:03:26.322 12:47:30 -- common/autobuild_common.sh@402 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --without-shared 00:03:26.322 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:26.322 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:26.888 Using 'verbs' RDMA provider 00:03:42.353 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:54.563 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:54.563 Creating mk/config.mk...done. 00:03:54.563 Creating mk/cc.flags.mk...done. 00:03:54.563 Type 'make' to build. 00:03:54.563 12:47:57 -- common/autobuild_common.sh@403 -- $ make -j10 00:03:54.563 make[1]: Nothing to be done for 'all'. 00:03:55.496 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:03:55.755 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:03:56.014 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:03:56.015 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:03:56.015 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:03:56.015 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:03:56.015 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:03:56.272 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:03:56.272 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:03:56.272 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:03:56.272 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:03:56.272 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:03:56.530 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:03:56.531 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:03:56.531 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:03:56.789 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:03:56.789 ./include//reg_sizes.asm:208: warning: Unknown section 
attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other]
[identical ./include//reg_sizes.asm:208 and ./include//reg_sizes.asm:358 warnings ("Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other]") repeat for every remaining ISA-L object assembled; duplicate lines elided]
[-w+other] 00:04:05.404 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:05.662 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:05.662 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:05.662 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:05.662 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:05.662 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:05.662 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:05.662 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:05.662 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:05.921 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:05.921 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:05.921 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:05.921 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:06.489 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:06.489 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:06.747 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:06.747 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:06.747 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:07.314 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:07.315 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:07.573 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:07.573 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:07.573 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:07.832 ./include//reg_sizes.asm:358: warning: Unknown section 
attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:07.832 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:07.832 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:07.832 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:08.090 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:08.090 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:08.090 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:08.367 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:08.367 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:08.659 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:08.659 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:08.659 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:08.659 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:08.659 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:08.918 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:08.918 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:08.918 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:08.918 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:09.177 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:09.177 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:09.177 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:09.177 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:09.437 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:09.437 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' 
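Every one of the diagnostics above is the same NASM warning, emitted once per assembled object: ISA-L's reg_sizes.asm declares a `.note.gnu.property` section (the ELF note that carries x86 properties such as IBT/SHSTK markers) using the `note` section attribute, and the NASM on this builder predates support for that attribute, so it drops the attribute and warns. The warning is benign; the objects still assemble and link, they merely lack the property note. A minimal repro sketch, assuming a pre-2.15 nasm on PATH; the file name is illustrative:

  # one-liner mirroring the kind of declaration reg_sizes.asm makes
  cat > note_prop.asm <<'EOF'
  section .note.gnu.property note alloc align=8
  EOF
  # an older nasm prints: warning: Unknown section attribute 'note' ignored
  # on declaration of section `.note.gnu.property' [-w+other]
  nasm -f elf64 note_prop.asm -o note_prop.o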
00:04:10.214 The Meson build system 00:04:10.214 Version: 1.4.0 00:04:10.214 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:04:10.214 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:04:10.214 Build type: native build 00:04:10.214 Program cat found: YES (/usr/bin/cat) 00:04:10.214 Project name: DPDK 00:04:10.214 Project version: 23.11.0 00:04:10.214 C compiler for the host machine: cc (gcc 9.4.0 "cc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0") 00:04:10.214 C linker for the host machine: cc ld.bfd 2.34 00:04:10.214 Host machine cpu family: x86_64 00:04:10.214 Host machine cpu: x86_64 00:04:10.214 Message: ## Building in Developer Mode ## 00:04:10.214 Program pkg-config found: YES (/usr/bin/pkg-config) 00:04:10.214 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:04:10.214 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:04:10.214 Program python3 found: YES (/usr/bin/python3) 00:04:10.214 Program cat found: YES (/usr/bin/cat) 00:04:10.214 Compiler for C supports arguments -march=native: YES 00:04:10.214 Checking for size of "void *" : 8 00:04:10.214 Checking for size of "void *" : 8 (cached) 00:04:10.214 Library m found: YES 00:04:10.214 Library numa found: YES 00:04:10.214 Has header "numaif.h" : YES 00:04:10.214 Library fdt found: NO 00:04:10.214 Library execinfo found: NO 00:04:10.214 Has header "execinfo.h" : YES 00:04:10.214 Found pkg-config: YES (/usr/bin/pkg-config) 0.29.1 00:04:10.214 Run-time dependency libarchive found: NO (tried pkgconfig) 00:04:10.214 Run-time dependency libbsd found: NO (tried pkgconfig) 00:04:10.214 Run-time dependency jansson found: NO (tried pkgconfig) 00:04:10.214 Run-time dependency openssl found: YES 1.1.1f 00:04:10.214 Run-time dependency libpcap found: NO (tried pkgconfig) 00:04:10.214 Library pcap found: NO 00:04:10.214 Compiler for C supports arguments -Wcast-qual: YES 00:04:10.214 Compiler for C supports arguments -Wdeprecated: YES 00:04:10.214 Compiler for C supports arguments -Wformat: YES 00:04:10.214 Compiler for C supports arguments -Wformat-nonliteral: YES 00:04:10.214 Compiler for C supports arguments -Wformat-security: YES 00:04:10.214 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:10.214 Compiler
for C supports arguments -Wmissing-prototypes: YES 00:04:10.214 Compiler for C supports arguments -Wnested-externs: YES 00:04:10.214 Compiler for C supports arguments -Wold-style-definition: YES 00:04:10.214 Compiler for C supports arguments -Wpointer-arith: YES 00:04:10.214 Compiler for C supports arguments -Wsign-compare: YES 00:04:10.214 Compiler for C supports arguments -Wstrict-prototypes: YES 00:04:10.214 Compiler for C supports arguments -Wundef: YES 00:04:10.214 Compiler for C supports arguments -Wwrite-strings: YES 00:04:10.214 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:04:10.214 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:04:10.214 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:10.214 Program objdump found: YES (/usr/bin/objdump) 00:04:10.214 Compiler for C supports arguments -mavx512f: YES 00:04:10.214 Checking if "AVX512 checking" compiles: YES 00:04:10.214 Fetching value of define "__SSE4_2__" : 1 00:04:10.214 Fetching value of define "__AES__" : 1 00:04:10.214 Fetching value of define "__AVX__" : 1 00:04:10.214 Fetching value of define "__AVX2__" : 1 00:04:10.214 Fetching value of define "__AVX512BW__" : (undefined) 00:04:10.214 Fetching value of define "__AVX512CD__" : (undefined) 00:04:10.214 Fetching value of define "__AVX512DQ__" : (undefined) 00:04:10.214 Fetching value of define "__AVX512F__" : (undefined) 00:04:10.214 Fetching value of define "__AVX512VL__" : (undefined) 00:04:10.214 Fetching value of define "__PCLMUL__" : 1 00:04:10.214 Fetching value of define "__RDRND__" : 1 00:04:10.214 Fetching value of define "__RDSEED__" : 1 00:04:10.214 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:04:10.214 Fetching value of define "__znver1__" : (undefined) 00:04:10.214 Fetching value of define "__znver2__" : (undefined) 00:04:10.214 Fetching value of define "__znver3__" : (undefined) 00:04:10.214 Fetching value of define "__znver4__" : (undefined) 00:04:10.214 Library asan found: YES 00:04:10.214 Compiler for C supports arguments -Wno-format-truncation: YES 00:04:10.214 Message: lib/log: Defining dependency "log" 00:04:10.214 Message: lib/kvargs: Defining dependency "kvargs" 00:04:10.214 Message: lib/telemetry: Defining dependency "telemetry" 00:04:10.215 Library rt found: YES 00:04:10.215 Checking for function "getentropy" : NO 00:04:10.215 Message: lib/eal: Defining dependency "eal" 00:04:10.215 Message: lib/ring: Defining dependency "ring" 00:04:10.215 Message: lib/rcu: Defining dependency "rcu" 00:04:10.215 Message: lib/mempool: Defining dependency "mempool" 00:04:10.215 Message: lib/mbuf: Defining dependency "mbuf" 00:04:10.215 Fetching value of define "__PCLMUL__" : 1 (cached) 00:04:10.215 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:04:10.215 Compiler for C supports arguments -mpclmul: YES 00:04:10.215 Compiler for C supports arguments -maes: YES 00:04:10.215 Compiler for C supports arguments -mavx512f: YES (cached) 00:04:10.215 Compiler for C supports arguments -mavx512bw: YES 00:04:10.215 Compiler for C supports arguments -mavx512dq: YES 00:04:10.215 Compiler for C supports arguments -mavx512vl: YES 00:04:10.215 Compiler for C supports arguments -mvpclmulqdq: YES 00:04:10.215 Compiler for C supports arguments -mavx2: YES 00:04:10.215 Compiler for C supports arguments -mavx: YES 00:04:10.215 Message: lib/net: Defining dependency "net" 00:04:10.215 Message: lib/meter: Defining dependency "meter" 00:04:10.215 Message: lib/ethdev: Defining dependency 
"ethdev" 00:04:10.215 Message: lib/pci: Defining dependency "pci" 00:04:10.215 Message: lib/cmdline: Defining dependency "cmdline" 00:04:10.215 Message: lib/hash: Defining dependency "hash" 00:04:10.215 Message: lib/timer: Defining dependency "timer" 00:04:10.215 Message: lib/compressdev: Defining dependency "compressdev" 00:04:10.215 Message: lib/cryptodev: Defining dependency "cryptodev" 00:04:10.215 Message: lib/dmadev: Defining dependency "dmadev" 00:04:10.215 Compiler for C supports arguments -Wno-cast-qual: YES 00:04:10.215 Message: lib/power: Defining dependency "power" 00:04:10.215 Message: lib/reorder: Defining dependency "reorder" 00:04:10.215 Message: lib/security: Defining dependency "security" 00:04:10.215 Has header "linux/userfaultfd.h" : YES 00:04:10.215 Has header "linux/vduse.h" : NO 00:04:10.215 Message: lib/vhost: Defining dependency "vhost" 00:04:10.215 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:04:10.215 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:04:10.215 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:04:10.215 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:04:10.215 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:04:10.215 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:04:10.215 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:04:10.215 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:04:10.215 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:04:10.215 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:04:10.215 Program doxygen found: YES (/usr/bin/doxygen) 00:04:10.215 Configuring doxy-api-html.conf using configuration 00:04:10.215 Configuring doxy-api-man.conf using configuration 00:04:10.215 Program mandb found: YES (/usr/bin/mandb) 00:04:10.215 Program sphinx-build found: NO 00:04:10.215 Configuring rte_build_config.h using configuration 00:04:10.215 Message: 00:04:10.215 ================= 00:04:10.215 Applications Enabled 00:04:10.215 ================= 00:04:10.215 00:04:10.215 apps: 00:04:10.215 00:04:10.215 00:04:10.215 Message: 00:04:10.215 ================= 00:04:10.215 Libraries Enabled 00:04:10.215 ================= 00:04:10.215 00:04:10.215 libs: 00:04:10.215 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:04:10.215 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:04:10.215 cryptodev, dmadev, power, reorder, security, vhost, 00:04:10.215 00:04:10.215 Message: 00:04:10.215 =============== 00:04:10.215 Drivers Enabled 00:04:10.215 =============== 00:04:10.215 00:04:10.215 common: 00:04:10.215 00:04:10.215 bus: 00:04:10.215 pci, vdev, 00:04:10.215 mempool: 00:04:10.215 ring, 00:04:10.215 dma: 00:04:10.215 00:04:10.215 net: 00:04:10.215 00:04:10.215 crypto: 00:04:10.215 00:04:10.215 compress: 00:04:10.215 00:04:10.215 vdpa: 00:04:10.215 00:04:10.215 00:04:10.215 Message: 00:04:10.215 ================= 00:04:10.215 Content Skipped 00:04:10.215 ================= 00:04:10.215 00:04:10.215 apps: 00:04:10.215 dumpcap: explicitly disabled via build config 00:04:10.215 graph: explicitly disabled via build config 00:04:10.215 pdump: explicitly disabled via build config 00:04:10.215 proc-info: explicitly disabled via build config 00:04:10.215 test-acl: explicitly disabled via build config 00:04:10.215 test-bbdev: explicitly disabled via build config 00:04:10.215 test-cmdline: explicitly disabled via 
build config 00:04:10.215 test-compress-perf: explicitly disabled via build config 00:04:10.215 test-crypto-perf: explicitly disabled via build config 00:04:10.215 test-dma-perf: explicitly disabled via build config 00:04:10.215 test-eventdev: explicitly disabled via build config 00:04:10.215 test-fib: explicitly disabled via build config 00:04:10.215 test-flow-perf: explicitly disabled via build config 00:04:10.215 test-gpudev: explicitly disabled via build config 00:04:10.215 test-mldev: explicitly disabled via build config 00:04:10.215 test-pipeline: explicitly disabled via build config 00:04:10.215 test-pmd: explicitly disabled via build config 00:04:10.215 test-regex: explicitly disabled via build config 00:04:10.215 test-sad: explicitly disabled via build config 00:04:10.215 test-security-perf: explicitly disabled via build config 00:04:10.215 00:04:10.215 libs: 00:04:10.215 metrics: explicitly disabled via build config 00:04:10.215 acl: explicitly disabled via build config 00:04:10.215 bbdev: explicitly disabled via build config 00:04:10.215 bitratestats: explicitly disabled via build config 00:04:10.215 bpf: explicitly disabled via build config 00:04:10.215 cfgfile: explicitly disabled via build config 00:04:10.215 distributor: explicitly disabled via build config 00:04:10.215 efd: explicitly disabled via build config 00:04:10.215 eventdev: explicitly disabled via build config 00:04:10.215 dispatcher: explicitly disabled via build config 00:04:10.215 gpudev: explicitly disabled via build config 00:04:10.215 gro: explicitly disabled via build config 00:04:10.215 gso: explicitly disabled via build config 00:04:10.215 ip_frag: explicitly disabled via build config 00:04:10.215 jobstats: explicitly disabled via build config 00:04:10.215 latencystats: explicitly disabled via build config 00:04:10.215 lpm: explicitly disabled via build config 00:04:10.215 member: explicitly disabled via build config 00:04:10.215 pcapng: explicitly disabled via build config 00:04:10.215 rawdev: explicitly disabled via build config 00:04:10.215 regexdev: explicitly disabled via build config 00:04:10.215 mldev: explicitly disabled via build config 00:04:10.215 rib: explicitly disabled via build config 00:04:10.215 sched: explicitly disabled via build config 00:04:10.215 stack: explicitly disabled via build config 00:04:10.215 ipsec: explicitly disabled via build config 00:04:10.215 pdcp: explicitly disabled via build config 00:04:10.215 fib: explicitly disabled via build config 00:04:10.215 port: explicitly disabled via build config 00:04:10.215 pdump: explicitly disabled via build config 00:04:10.215 table: explicitly disabled via build config 00:04:10.215 pipeline: explicitly disabled via build config 00:04:10.215 graph: explicitly disabled via build config 00:04:10.215 node: explicitly disabled via build config 00:04:10.215 00:04:10.215 drivers: 00:04:10.215 common/cpt: not in enabled drivers build config 00:04:10.215 common/dpaax: not in enabled drivers build config 00:04:10.215 common/iavf: not in enabled drivers build config 00:04:10.215 common/idpf: not in enabled drivers build config 00:04:10.215 common/mvep: not in enabled drivers build config 00:04:10.215 common/octeontx: not in enabled drivers build config 00:04:10.215 bus/auxiliary: not in enabled drivers build config 00:04:10.215 bus/cdx: not in enabled drivers build config 00:04:10.215 bus/dpaa: not in enabled drivers build config 00:04:10.215 bus/fslmc: not in enabled drivers build config 00:04:10.215 bus/ifpga: not in enabled drivers build 
config 00:04:10.215 bus/platform: not in enabled drivers build config 00:04:10.215 bus/vmbus: not in enabled drivers build config 00:04:10.215 common/cnxk: not in enabled drivers build config 00:04:10.215 common/mlx5: not in enabled drivers build config 00:04:10.215 common/nfp: not in enabled drivers build config 00:04:10.215 common/qat: not in enabled drivers build config 00:04:10.215 common/sfc_efx: not in enabled drivers build config 00:04:10.215 mempool/bucket: not in enabled drivers build config 00:04:10.215 mempool/cnxk: not in enabled drivers build config 00:04:10.215 mempool/dpaa: not in enabled drivers build config 00:04:10.215 mempool/dpaa2: not in enabled drivers build config 00:04:10.215 mempool/octeontx: not in enabled drivers build config 00:04:10.215 mempool/stack: not in enabled drivers build config 00:04:10.215 dma/cnxk: not in enabled drivers build config 00:04:10.215 dma/dpaa: not in enabled drivers build config 00:04:10.215 dma/dpaa2: not in enabled drivers build config 00:04:10.215 dma/hisilicon: not in enabled drivers build config 00:04:10.215 dma/idxd: not in enabled drivers build config 00:04:10.215 dma/ioat: not in enabled drivers build config 00:04:10.215 dma/skeleton: not in enabled drivers build config 00:04:10.215 net/af_packet: not in enabled drivers build config 00:04:10.215 net/af_xdp: not in enabled drivers build config 00:04:10.215 net/ark: not in enabled drivers build config 00:04:10.215 net/atlantic: not in enabled drivers build config 00:04:10.215 net/avp: not in enabled drivers build config 00:04:10.215 net/axgbe: not in enabled drivers build config 00:04:10.215 net/bnx2x: not in enabled drivers build config 00:04:10.215 net/bnxt: not in enabled drivers build config 00:04:10.215 net/bonding: not in enabled drivers build config 00:04:10.215 net/cnxk: not in enabled drivers build config 00:04:10.215 net/cpfl: not in enabled drivers build config 00:04:10.215 net/cxgbe: not in enabled drivers build config 00:04:10.215 net/dpaa: not in enabled drivers build config 00:04:10.215 net/dpaa2: not in enabled drivers build config 00:04:10.215 net/e1000: not in enabled drivers build config 00:04:10.215 net/ena: not in enabled drivers build config 00:04:10.215 net/enetc: not in enabled drivers build config 00:04:10.215 net/enetfec: not in enabled drivers build config 00:04:10.215 net/enic: not in enabled drivers build config 00:04:10.215 net/failsafe: not in enabled drivers build config 00:04:10.215 net/fm10k: not in enabled drivers build config 00:04:10.216 net/gve: not in enabled drivers build config 00:04:10.216 net/hinic: not in enabled drivers build config 00:04:10.216 net/hns3: not in enabled drivers build config 00:04:10.216 net/i40e: not in enabled drivers build config 00:04:10.216 net/iavf: not in enabled drivers build config 00:04:10.216 net/ice: not in enabled drivers build config 00:04:10.216 net/idpf: not in enabled drivers build config 00:04:10.216 net/igc: not in enabled drivers build config 00:04:10.216 net/ionic: not in enabled drivers build config 00:04:10.216 net/ipn3ke: not in enabled drivers build config 00:04:10.216 net/ixgbe: not in enabled drivers build config 00:04:10.216 net/mana: not in enabled drivers build config 00:04:10.216 net/memif: not in enabled drivers build config 00:04:10.216 net/mlx4: not in enabled drivers build config 00:04:10.216 net/mlx5: not in enabled drivers build config 00:04:10.216 net/mvneta: not in enabled drivers build config 00:04:10.216 net/mvpp2: not in enabled drivers build config 00:04:10.216 net/netvsc: not in 
enabled drivers build config 00:04:10.216 net/nfb: not in enabled drivers build config 00:04:10.216 net/nfp: not in enabled drivers build config 00:04:10.216 net/ngbe: not in enabled drivers build config 00:04:10.216 net/null: not in enabled drivers build config 00:04:10.216 net/octeontx: not in enabled drivers build config 00:04:10.216 net/octeon_ep: not in enabled drivers build config 00:04:10.216 net/pcap: not in enabled drivers build config 00:04:10.216 net/pfe: not in enabled drivers build config 00:04:10.216 net/qede: not in enabled drivers build config 00:04:10.216 net/ring: not in enabled drivers build config 00:04:10.216 net/sfc: not in enabled drivers build config 00:04:10.216 net/softnic: not in enabled drivers build config 00:04:10.216 net/tap: not in enabled drivers build config 00:04:10.216 net/thunderx: not in enabled drivers build config 00:04:10.216 net/txgbe: not in enabled drivers build config 00:04:10.216 net/vdev_netvsc: not in enabled drivers build config 00:04:10.216 net/vhost: not in enabled drivers build config 00:04:10.216 net/virtio: not in enabled drivers build config 00:04:10.216 net/vmxnet3: not in enabled drivers build config 00:04:10.216 raw/*: missing internal dependency, "rawdev" 00:04:10.216 crypto/armv8: not in enabled drivers build config 00:04:10.216 crypto/bcmfs: not in enabled drivers build config 00:04:10.216 crypto/caam_jr: not in enabled drivers build config 00:04:10.216 crypto/ccp: not in enabled drivers build config 00:04:10.216 crypto/cnxk: not in enabled drivers build config 00:04:10.216 crypto/dpaa_sec: not in enabled drivers build config 00:04:10.216 crypto/dpaa2_sec: not in enabled drivers build config 00:04:10.216 crypto/ipsec_mb: not in enabled drivers build config 00:04:10.216 crypto/mlx5: not in enabled drivers build config 00:04:10.216 crypto/mvsam: not in enabled drivers build config 00:04:10.216 crypto/nitrox: not in enabled drivers build config 00:04:10.216 crypto/null: not in enabled drivers build config 00:04:10.216 crypto/octeontx: not in enabled drivers build config 00:04:10.216 crypto/openssl: not in enabled drivers build config 00:04:10.216 crypto/scheduler: not in enabled drivers build config 00:04:10.216 crypto/uadk: not in enabled drivers build config 00:04:10.216 crypto/virtio: not in enabled drivers build config 00:04:10.216 compress/isal: not in enabled drivers build config 00:04:10.216 compress/mlx5: not in enabled drivers build config 00:04:10.216 compress/octeontx: not in enabled drivers build config 00:04:10.216 compress/zlib: not in enabled drivers build config 00:04:10.216 regex/*: missing internal dependency, "regexdev" 00:04:10.216 ml/*: missing internal dependency, "mldev" 00:04:10.216 vdpa/ifc: not in enabled drivers build config 00:04:10.216 vdpa/mlx5: not in enabled drivers build config 00:04:10.216 vdpa/nfp: not in enabled drivers build config 00:04:10.216 vdpa/sfc: not in enabled drivers build config 00:04:10.216 event/*: missing internal dependency, "eventdev" 00:04:10.216 baseband/*: missing internal dependency, "bbdev" 00:04:10.216 gpu/*: missing internal dependency, "gpudev" 00:04:10.216 00:04:10.216 00:04:10.216 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:04:10.474 Build targets in project: 85 00:04:10.474 00:04:10.474 DPDK 23.11.0 00:04:10.475 00:04:10.475 User defined options 00:04:10.475 buildtype : debug 00:04:10.475 default_library : static 00:04:10.475 libdir : lib 00:04:10.475 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:10.475 b_sanitize : address 00:04:10.475 c_args : -fPIC -Werror 00:04:10.475 c_link_args : 00:04:10.475 cpu_instruction_set: native 00:04:10.475 disable_apps : graph,dumpcap,test,test-gpudev,test-dma-perf,test-cmdline,test-compress-perf,pdump,test-fib,test-mldev,test-regex,proc-info,test-crypto-perf,test-pipeline,test-security-perf,test-acl,test-sad,test-pmd,test-flow-perf,test-bbdev,test-eventdev 00:04:10.475 disable_libs : gro,eventdev,lpm,efd,node,acl,bitratestats,port,graph,pipeline,pdcp,gpudev,ipsec,jobstats,dispatcher,mldev,pdump,gso,metrics,latencystats,bbdev,rawdev,stack,member,cfgfile,sched,pcapng,bpf,ip_frag,distributor,fib,regexdev,rib,table 00:04:10.475 enable_docs : false 00:04:10.475 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:04:10.475 enable_kmods : false 00:04:10.475 tests : false 00:04:10.475 00:04:10.475 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
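The User defined options summary above pins down the whole DPDK configure step. A minimal sketch of an equivalent manual invocation, assuming it is run from the DPDK source directory; the long disable_apps/disable_libs values are exactly the lists printed above (omitted from the sketch), and in this job the step is driven by SPDK's build scripts rather than typed by hand:

  cd /home/vagrant/spdk_repo/spdk/dpdk
  meson setup build-tmp \
      --buildtype=debug \
      --default-library=static \
      --libdir=lib \
      --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
      -Db_sanitize=address \
      -Dc_args='-fPIC -Werror' \
      -Dcpu_instruction_set=native \
      -Denable_docs=false \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
      -Denable_kmods=false \
      -Dtests=false
  # the backend command the build lines below come from
  ninja -C build-tmp -j 10

b_sanitize=address is what puts the whole DPDK build under AddressSanitizer, matching the earlier "Library asan found: YES" probe.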
00:04:10.992 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:04:10.992 [1/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:04:10.992 [2/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:10.992 [3/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:10.992 [4/264] Linking static target lib/librte_kvargs.a 00:04:11.251 [5/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:11.251 [6/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:11.251 [7/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:11.251 [8/264] Linking static target lib/librte_log.a 00:04:11.251 [9/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:11.251 [10/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:11.251 [11/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:11.251 [12/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:11.510 [13/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:11.510 [14/264] Compiling C object
lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:11.510 [15/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:11.510 [16/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:11.510 [17/264] Linking static target lib/librte_telemetry.a 00:04:11.510 [18/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:11.510 [19/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:11.510 [20/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:11.510 [21/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:11.769 [22/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:11.769 [23/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:11.769 [24/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:11.769 [25/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:11.769 [26/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:11.769 [27/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:11.769 [28/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:12.027 [29/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:12.027 [30/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:12.027 [31/264] Linking target lib/librte_log.so.24.0 00:04:12.027 [32/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:12.027 [33/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:12.027 [34/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:12.027 [35/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:12.027 [36/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:04:12.027 [37/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:12.027 [38/264] Linking target lib/librte_kvargs.so.24.0 00:04:12.027 [39/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:12.027 [40/264] Linking target lib/librte_telemetry.so.24.0 00:04:12.027 [41/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:12.285 [42/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:12.285 [43/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:12.285 [44/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:12.285 [45/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:04:12.285 [46/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:12.285 [47/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:04:12.285 [48/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:12.285 [49/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:12.285 [50/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:12.285 [51/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:12.285 [52/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:12.544 [53/264] Compiling C 
object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:12.544 [54/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:12.544 [55/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:12.544 [56/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:12.544 [57/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:12.544 [58/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:12.544 [59/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:12.544 [60/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:12.544 [61/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:12.544 [62/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:12.544 [63/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:12.544 [64/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:12.802 [65/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:12.802 [66/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:12.802 [67/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:12.802 [68/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:12.802 [69/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:13.060 [70/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:13.060 [71/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:13.060 [72/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:13.060 [73/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:13.060 [74/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:13.060 [75/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:13.060 [76/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:13.060 [77/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:13.060 [78/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:13.060 [79/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:13.060 [80/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:13.060 [81/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:13.317 [82/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:13.317 [83/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:13.317 [84/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:13.317 [85/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:13.317 [86/264] Linking static target lib/librte_ring.a 00:04:13.317 [87/264] Linking static target lib/librte_eal.a 00:04:13.576 [88/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:13.576 [89/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:13.576 [90/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:13.576 [91/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:13.576 [92/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:13.576 [93/264] Linking static target lib/librte_mempool.a 00:04:13.576 [94/264] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:13.576 [95/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:13.885 [96/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:13.885 [97/264] Linking static target lib/librte_rcu.a 00:04:13.885 [98/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:13.885 [99/264] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:13.885 [100/264] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:13.885 [101/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:13.885 [102/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:13.885 [103/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:14.150 [104/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:14.150 [105/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:14.150 [106/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:14.150 [107/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:14.150 [108/264] Linking static target lib/librte_net.a 00:04:14.150 [109/264] Linking static target lib/librte_meter.a 00:04:14.150 [110/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:14.150 [111/264] Linking static target lib/librte_mbuf.a 00:04:14.409 [112/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:14.409 [113/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:14.409 [114/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:14.409 [115/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:14.409 [116/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:14.409 [117/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:14.409 [118/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:14.668 [119/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:14.927 [120/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:14.927 [121/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:14.927 [122/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:14.927 [123/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:14.927 [124/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:14.927 [125/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:14.927 [126/264] Linking static target lib/librte_pci.a 00:04:15.186 [127/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:15.186 [128/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:15.186 [129/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:15.186 [130/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:15.186 [131/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:15.186 [132/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:15.186 [133/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:15.186 [134/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:15.186 [135/264] Compiling C 
object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:15.186 [136/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:15.186 [137/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:15.445 [138/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:15.445 [139/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:15.445 [140/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:15.445 [141/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:15.445 [142/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:15.445 [143/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:15.445 [144/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:15.445 [145/264] Linking static target lib/librte_cmdline.a 00:04:15.704 [146/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:15.704 [147/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:15.704 [148/264] Linking static target lib/librte_timer.a 00:04:15.704 [149/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:15.963 [150/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:15.963 [151/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:15.963 [152/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:16.221 [153/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:16.221 [154/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:16.221 [155/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:16.221 [156/264] Linking static target lib/librte_compressdev.a 00:04:16.221 [157/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:16.221 [158/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:16.480 [159/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:16.480 [160/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:16.480 [161/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:16.480 [162/264] Linking static target lib/librte_dmadev.a 00:04:16.480 [163/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:16.480 [164/264] Linking static target lib/librte_hash.a 00:04:16.739 [165/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:16.739 [166/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:16.739 [167/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:16.739 [168/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:16.739 [169/264] Linking static target lib/librte_ethdev.a 00:04:16.739 [170/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:16.739 [171/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:16.739 [172/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:16.739 [173/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:16.998 [174/264] Compiling C object 
lib/librte_power.a.p/power_rte_power.c.o 00:04:16.998 [175/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:16.998 [176/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:16.998 [177/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:16.998 [178/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:17.257 [179/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:17.257 [180/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:17.257 [181/264] Linking static target lib/librte_power.a 00:04:17.257 [182/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:17.257 [183/264] Linking static target lib/librte_cryptodev.a 00:04:17.515 [184/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:17.515 [185/264] Linking static target lib/librte_reorder.a 00:04:17.515 [186/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:17.515 [187/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:17.515 [188/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:17.774 [189/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:17.774 [190/264] Linking static target lib/librte_security.a 00:04:17.774 [191/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.033 [192/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.033 [193/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:18.033 [194/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:18.033 [195/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.291 [196/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:18.291 [197/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:18.550 [198/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:18.550 [199/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:18.550 [200/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:18.550 [201/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:18.550 [202/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:18.808 [203/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:18.808 [204/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:18.808 [205/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.808 [206/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:18.808 [207/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:19.067 [208/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:19.067 [209/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:19.067 [210/264] Linking static target drivers/librte_bus_pci.a 00:04:19.067 [211/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:19.067 [212/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:19.067 [213/264] Compiling C object 
drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:19.067 [214/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:19.067 [215/264] Linking static target drivers/librte_bus_vdev.a 00:04:19.067 [216/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:19.067 [217/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:19.396 [218/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:19.396 [219/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:19.396 [220/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:19.396 [221/264] Linking static target drivers/librte_mempool_ring.a 00:04:19.396 [222/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:19.674 [223/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:20.627 [224/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:20.627 [225/264] Linking target lib/librte_eal.so.24.0 00:04:20.886 [226/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:04:20.886 [227/264] Linking target lib/librte_timer.so.24.0 00:04:20.886 [228/264] Linking target lib/librte_ring.so.24.0 00:04:20.886 [229/264] Linking target lib/librte_meter.so.24.0 00:04:20.886 [230/264] Linking target lib/librte_pci.so.24.0 00:04:20.886 [231/264] Linking target lib/librte_dmadev.so.24.0 00:04:20.886 [232/264] Linking target drivers/librte_bus_vdev.so.24.0 00:04:20.886 [233/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:04:20.886 [234/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:04:20.886 [235/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:04:20.886 [236/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:04:20.886 [237/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:04:21.145 [238/264] Linking target lib/librte_mempool.so.24.0 00:04:21.145 [239/264] Linking target lib/librte_rcu.so.24.0 00:04:21.145 [240/264] Linking target drivers/librte_bus_pci.so.24.0 00:04:21.145 [241/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:04:21.145 [242/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:04:21.145 [243/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:21.145 [244/264] Linking target drivers/librte_mempool_ring.so.24.0 00:04:21.145 [245/264] Linking target lib/librte_mbuf.so.24.0 00:04:21.403 [246/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:04:21.403 [247/264] Linking target lib/librte_net.so.24.0 00:04:21.403 [248/264] Linking target lib/librte_reorder.so.24.0 00:04:21.403 [249/264] Linking target lib/librte_cryptodev.so.24.0 00:04:21.403 [250/264] Linking target lib/librte_compressdev.so.24.0 00:04:21.403 [251/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:04:21.403 [252/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:04:21.403 [253/264] Linking target lib/librte_cmdline.so.24.0 00:04:21.403 [254/264] Linking target 
lib/librte_hash.so.24.0 00:04:21.403 [255/264] Linking target lib/librte_security.so.24.0 00:04:21.661 [256/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:04:22.597 [257/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:22.597 [258/264] Linking target lib/librte_ethdev.so.24.0 00:04:22.597 [259/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:04:22.597 [260/264] Linking target lib/librte_power.so.24.0 00:04:25.949 [261/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:25.949 [262/264] Linking static target lib/librte_vhost.a 00:04:26.884 [263/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:27.155 [264/264] Linking target lib/librte_vhost.so.24.0 00:04:27.155 INFO: autodetecting backend as ninja 00:04:27.155 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:28.098 CC lib/ut_mock/mock.o 00:04:28.098 CC lib/ut/ut.o 00:04:28.098 CC lib/log/log.o 00:04:28.098 CC lib/log/log_deprecated.o 00:04:28.098 CC lib/log/log_flags.o 00:04:28.098 LIB libspdk_ut.a 00:04:28.356 LIB libspdk_ut_mock.a 00:04:28.356 LIB libspdk_log.a 00:04:28.356 CC lib/util/base64.o 00:04:28.356 CC lib/util/bit_array.o 00:04:28.356 CC lib/util/crc16.o 00:04:28.356 CC lib/dma/dma.o 00:04:28.356 CC lib/util/cpuset.o 00:04:28.356 CC lib/util/crc32.o 00:04:28.356 CC lib/ioat/ioat.o 00:04:28.356 CC lib/util/crc32c.o 00:04:28.356 CXX lib/trace_parser/trace.o 00:04:28.356 CC lib/vfio_user/host/vfio_user_pci.o 00:04:28.614 CC lib/vfio_user/host/vfio_user.o 00:04:28.614 CC lib/util/crc32_ieee.o 00:04:28.614 CC lib/util/crc64.o 00:04:28.614 LIB libspdk_dma.a 00:04:28.614 CC lib/util/dif.o 00:04:28.614 CC lib/util/fd.o 00:04:28.614 CC lib/util/file.o 00:04:28.614 CC lib/util/hexlify.o 00:04:28.614 CC lib/util/iov.o 00:04:28.871 CC lib/util/math.o 00:04:28.871 CC lib/util/pipe.o 00:04:28.871 LIB libspdk_ioat.a 00:04:28.871 CC lib/util/strerror_tls.o 00:04:28.871 LIB libspdk_vfio_user.a 00:04:28.871 CC lib/util/string.o 00:04:28.871 CC lib/util/uuid.o 00:04:28.871 CC lib/util/fd_group.o 00:04:28.871 CC lib/util/xor.o 00:04:28.871 CC lib/util/zipf.o 00:04:29.436 LIB libspdk_util.a 00:04:29.436 CC lib/conf/conf.o 00:04:29.436 CC lib/env_dpdk/memory.o 00:04:29.436 CC lib/rdma/rdma_verbs.o 00:04:29.436 CC lib/rdma/common.o 00:04:29.436 CC lib/json/json_parse.o 00:04:29.436 CC lib/env_dpdk/env.o 00:04:29.436 CC lib/json/json_util.o 00:04:29.436 CC lib/vmd/vmd.o 00:04:29.436 CC lib/idxd/idxd.o 00:04:29.694 LIB libspdk_trace_parser.a 00:04:29.694 CC lib/vmd/led.o 00:04:29.694 CC lib/json/json_write.o 00:04:29.694 LIB libspdk_conf.a 00:04:29.694 CC lib/idxd/idxd_user.o 00:04:29.694 CC lib/env_dpdk/pci.o 00:04:29.951 CC lib/env_dpdk/init.o 00:04:29.951 CC lib/env_dpdk/threads.o 00:04:29.951 LIB libspdk_rdma.a 00:04:29.951 CC lib/env_dpdk/pci_ioat.o 00:04:29.951 CC lib/env_dpdk/pci_virtio.o 00:04:29.951 CC lib/env_dpdk/pci_vmd.o 00:04:29.951 CC lib/env_dpdk/pci_idxd.o 00:04:29.951 LIB libspdk_json.a 00:04:30.208 CC lib/env_dpdk/pci_event.o 00:04:30.208 CC lib/env_dpdk/sigbus_handler.o 00:04:30.208 CC lib/env_dpdk/pci_dpdk.o 00:04:30.208 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:30.208 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:30.208 LIB libspdk_idxd.a 00:04:30.208 CC lib/jsonrpc/jsonrpc_server.o 00:04:30.208 CC lib/jsonrpc/jsonrpc_client.o 00:04:30.208 CC lib/jsonrpc/jsonrpc_server_tcp.o 
00:04:30.209 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:30.466 LIB libspdk_vmd.a 00:04:30.466 LIB libspdk_jsonrpc.a 00:04:30.723 CC lib/rpc/rpc.o 00:04:30.981 LIB libspdk_rpc.a 00:04:30.981 CC lib/trace/trace_flags.o 00:04:30.981 CC lib/trace/trace.o 00:04:30.981 CC lib/trace/trace_rpc.o 00:04:30.981 CC lib/notify/notify.o 00:04:30.981 CC lib/notify/notify_rpc.o 00:04:31.275 CC lib/keyring/keyring_rpc.o 00:04:31.275 CC lib/keyring/keyring.o 00:04:31.275 LIB libspdk_env_dpdk.a 00:04:31.275 LIB libspdk_notify.a 00:04:31.275 LIB libspdk_trace.a 00:04:31.532 LIB libspdk_keyring.a 00:04:31.532 CC lib/sock/sock_rpc.o 00:04:31.532 CC lib/sock/sock.o 00:04:31.532 CC lib/thread/thread.o 00:04:31.532 CC lib/thread/iobuf.o 00:04:32.099 LIB libspdk_sock.a 00:04:32.099 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:32.099 CC lib/nvme/nvme_ctrlr.o 00:04:32.099 CC lib/nvme/nvme_fabric.o 00:04:32.099 CC lib/nvme/nvme_ns_cmd.o 00:04:32.099 CC lib/nvme/nvme_ns.o 00:04:32.099 CC lib/nvme/nvme_pcie_common.o 00:04:32.099 CC lib/nvme/nvme_qpair.o 00:04:32.099 CC lib/nvme/nvme.o 00:04:32.099 CC lib/nvme/nvme_pcie.o 00:04:32.665 CC lib/nvme/nvme_quirks.o 00:04:32.924 CC lib/nvme/nvme_transport.o 00:04:32.924 CC lib/nvme/nvme_discovery.o 00:04:32.924 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:32.924 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:32.924 CC lib/nvme/nvme_tcp.o 00:04:32.924 CC lib/nvme/nvme_opal.o 00:04:33.183 CC lib/nvme/nvme_io_msg.o 00:04:33.183 CC lib/nvme/nvme_poll_group.o 00:04:33.441 CC lib/nvme/nvme_zns.o 00:04:33.441 LIB libspdk_thread.a 00:04:33.441 CC lib/nvme/nvme_stubs.o 00:04:33.441 CC lib/nvme/nvme_auth.o 00:04:33.441 CC lib/nvme/nvme_cuse.o 00:04:33.441 CC lib/accel/accel.o 00:04:33.700 CC lib/nvme/nvme_rdma.o 00:04:33.700 CC lib/accel/accel_rpc.o 00:04:33.700 CC lib/accel/accel_sw.o 00:04:33.958 CC lib/blob/blobstore.o 00:04:33.958 CC lib/blob/request.o 00:04:33.958 CC lib/init/json_config.o 00:04:33.958 CC lib/virtio/virtio.o 00:04:33.958 CC lib/virtio/virtio_vhost_user.o 00:04:34.216 CC lib/init/subsystem.o 00:04:34.216 CC lib/blob/zeroes.o 00:04:34.216 CC lib/blob/blob_bs_dev.o 00:04:34.475 CC lib/virtio/virtio_vfio_user.o 00:04:34.475 CC lib/init/subsystem_rpc.o 00:04:34.475 CC lib/virtio/virtio_pci.o 00:04:34.475 CC lib/init/rpc.o 00:04:34.768 LIB libspdk_init.a 00:04:34.768 LIB libspdk_accel.a 00:04:34.768 LIB libspdk_virtio.a 00:04:34.768 CC lib/event/reactor.o 00:04:34.768 CC lib/event/app.o 00:04:34.768 CC lib/event/app_rpc.o 00:04:34.768 CC lib/event/log_rpc.o 00:04:34.768 CC lib/event/scheduler_static.o 00:04:35.026 CC lib/bdev/bdev.o 00:04:35.026 CC lib/bdev/bdev_rpc.o 00:04:35.026 CC lib/bdev/bdev_zone.o 00:04:35.026 CC lib/bdev/part.o 00:04:35.026 CC lib/bdev/scsi_nvme.o 00:04:35.026 LIB libspdk_nvme.a 00:04:35.284 LIB libspdk_event.a 00:04:37.816 LIB libspdk_blob.a 00:04:38.074 CC lib/blobfs/tree.o 00:04:38.074 CC lib/blobfs/blobfs.o 00:04:38.074 CC lib/lvol/lvol.o 00:04:38.332 LIB libspdk_bdev.a 00:04:38.332 CC lib/scsi/dev.o 00:04:38.332 CC lib/scsi/port.o 00:04:38.332 CC lib/scsi/lun.o 00:04:38.332 CC lib/scsi/scsi.o 00:04:38.332 CC lib/scsi/scsi_bdev.o 00:04:38.332 CC lib/nbd/nbd.o 00:04:38.332 CC lib/nvmf/ctrlr.o 00:04:38.333 CC lib/ftl/ftl_core.o 00:04:38.590 CC lib/scsi/scsi_pr.o 00:04:38.590 CC lib/nvmf/ctrlr_discovery.o 00:04:38.590 CC lib/nvmf/ctrlr_bdev.o 00:04:38.849 CC lib/scsi/scsi_rpc.o 00:04:38.849 CC lib/nbd/nbd_rpc.o 00:04:38.849 CC lib/ftl/ftl_init.o 00:04:38.849 CC lib/ftl/ftl_layout.o 00:04:39.107 CC lib/scsi/task.o 00:04:39.107 CC lib/ftl/ftl_debug.o 00:04:39.107 LIB 
libspdk_nbd.a 00:04:39.107 LIB libspdk_blobfs.a 00:04:39.107 CC lib/ftl/ftl_io.o 00:04:39.107 CC lib/ftl/ftl_sb.o 00:04:39.107 CC lib/nvmf/subsystem.o 00:04:39.365 LIB libspdk_scsi.a 00:04:39.365 CC lib/nvmf/nvmf.o 00:04:39.365 CC lib/ftl/ftl_l2p.o 00:04:39.365 CC lib/nvmf/nvmf_rpc.o 00:04:39.365 CC lib/nvmf/transport.o 00:04:39.365 CC lib/nvmf/tcp.o 00:04:39.365 LIB libspdk_lvol.a 00:04:39.365 CC lib/ftl/ftl_l2p_flat.o 00:04:39.365 CC lib/ftl/ftl_nv_cache.o 00:04:39.623 CC lib/nvmf/rdma.o 00:04:39.623 CC lib/ftl/ftl_band.o 00:04:39.623 CC lib/iscsi/conn.o 00:04:40.189 CC lib/iscsi/init_grp.o 00:04:40.189 CC lib/iscsi/iscsi.o 00:04:40.189 CC lib/iscsi/md5.o 00:04:40.189 CC lib/iscsi/param.o 00:04:40.447 CC lib/ftl/ftl_band_ops.o 00:04:40.447 CC lib/ftl/ftl_writer.o 00:04:40.447 CC lib/iscsi/portal_grp.o 00:04:40.447 CC lib/iscsi/tgt_node.o 00:04:40.447 CC lib/ftl/ftl_rq.o 00:04:40.704 CC lib/ftl/ftl_reloc.o 00:04:40.704 CC lib/iscsi/iscsi_subsystem.o 00:04:40.704 CC lib/iscsi/iscsi_rpc.o 00:04:40.704 CC lib/iscsi/task.o 00:04:40.704 CC lib/ftl/ftl_l2p_cache.o 00:04:40.962 CC lib/vhost/vhost.o 00:04:40.962 CC lib/vhost/vhost_rpc.o 00:04:40.962 CC lib/vhost/vhost_scsi.o 00:04:40.962 CC lib/vhost/vhost_blk.o 00:04:41.219 CC lib/vhost/rte_vhost_user.o 00:04:41.219 CC lib/ftl/ftl_p2l.o 00:04:41.478 CC lib/ftl/mngt/ftl_mngt.o 00:04:41.478 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:41.478 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:41.478 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:41.478 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:41.736 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:41.736 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:41.736 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:41.736 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:41.736 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:41.736 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:41.736 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:41.994 LIB libspdk_iscsi.a 00:04:41.994 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:41.994 CC lib/ftl/utils/ftl_conf.o 00:04:41.994 CC lib/ftl/utils/ftl_md.o 00:04:41.994 CC lib/ftl/utils/ftl_mempool.o 00:04:41.994 CC lib/ftl/utils/ftl_bitmap.o 00:04:41.994 CC lib/ftl/utils/ftl_property.o 00:04:41.994 LIB libspdk_nvmf.a 00:04:41.994 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:42.284 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:42.284 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:42.284 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:42.284 LIB libspdk_vhost.a 00:04:42.284 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:42.284 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:42.284 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:42.284 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:42.284 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:42.284 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:42.284 CC lib/ftl/base/ftl_base_dev.o 00:04:42.284 CC lib/ftl/base/ftl_base_bdev.o 00:04:42.284 CC lib/ftl/ftl_trace.o 00:04:42.860 LIB libspdk_ftl.a 00:04:43.119 CC module/env_dpdk/env_dpdk_rpc.o 00:04:43.119 CC module/accel/error/accel_error.o 00:04:43.119 CC module/accel/ioat/accel_ioat.o 00:04:43.119 CC module/keyring/file/keyring.o 00:04:43.119 CC module/blob/bdev/blob_bdev.o 00:04:43.119 CC module/scheduler/gscheduler/gscheduler.o 00:04:43.119 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:43.119 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:43.119 CC module/sock/posix/posix.o 00:04:43.119 CC module/accel/dsa/accel_dsa.o 00:04:43.119 LIB libspdk_env_dpdk_rpc.a 00:04:43.378 CC module/accel/ioat/accel_ioat_rpc.o 00:04:43.378 LIB libspdk_scheduler_gscheduler.a 00:04:43.378 LIB libspdk_scheduler_dpdk_governor.a 
00:04:43.378 CC module/accel/error/accel_error_rpc.o 00:04:43.378 CC module/keyring/file/keyring_rpc.o 00:04:43.378 CC module/accel/dsa/accel_dsa_rpc.o 00:04:43.378 LIB libspdk_accel_ioat.a 00:04:43.378 LIB libspdk_scheduler_dynamic.a 00:04:43.378 LIB libspdk_accel_error.a 00:04:43.378 CC module/keyring/linux/keyring.o 00:04:43.378 CC module/keyring/linux/keyring_rpc.o 00:04:43.378 CC module/accel/iaa/accel_iaa.o 00:04:43.378 CC module/accel/iaa/accel_iaa_rpc.o 00:04:43.378 LIB libspdk_keyring_file.a 00:04:43.378 LIB libspdk_accel_dsa.a 00:04:43.636 LIB libspdk_blob_bdev.a 00:04:43.636 LIB libspdk_keyring_linux.a 00:04:43.636 LIB libspdk_accel_iaa.a 00:04:43.636 CC module/bdev/gpt/gpt.o 00:04:43.636 CC module/bdev/error/vbdev_error.o 00:04:43.636 CC module/bdev/lvol/vbdev_lvol.o 00:04:43.636 CC module/blobfs/bdev/blobfs_bdev.o 00:04:43.636 CC module/bdev/null/bdev_null.o 00:04:43.636 CC module/bdev/delay/vbdev_delay.o 00:04:43.636 CC module/bdev/malloc/bdev_malloc.o 00:04:43.895 CC module/bdev/nvme/bdev_nvme.o 00:04:43.895 CC module/bdev/passthru/vbdev_passthru.o 00:04:43.895 CC module/bdev/gpt/vbdev_gpt.o 00:04:43.895 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:43.895 CC module/bdev/error/vbdev_error_rpc.o 00:04:44.153 CC module/bdev/null/bdev_null_rpc.o 00:04:44.153 LIB libspdk_sock_posix.a 00:04:44.153 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:44.153 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:44.153 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:44.153 LIB libspdk_blobfs_bdev.a 00:04:44.153 LIB libspdk_bdev_error.a 00:04:44.153 CC module/bdev/nvme/nvme_rpc.o 00:04:44.153 LIB libspdk_bdev_gpt.a 00:04:44.153 CC module/bdev/nvme/bdev_mdns_client.o 00:04:44.153 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:44.153 LIB libspdk_bdev_null.a 00:04:44.153 CC module/bdev/nvme/vbdev_opal.o 00:04:44.411 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:44.411 LIB libspdk_bdev_delay.a 00:04:44.411 LIB libspdk_bdev_malloc.a 00:04:44.411 CC module/bdev/raid/bdev_raid.o 00:04:44.411 CC module/bdev/raid/bdev_raid_rpc.o 00:04:44.411 LIB libspdk_bdev_passthru.a 00:04:44.411 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:44.411 CC module/bdev/split/vbdev_split.o 00:04:44.669 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:44.669 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:44.669 CC module/bdev/aio/bdev_aio.o 00:04:44.669 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:44.669 LIB libspdk_bdev_lvol.a 00:04:44.669 CC module/bdev/split/vbdev_split_rpc.o 00:04:44.926 CC module/bdev/aio/bdev_aio_rpc.o 00:04:44.926 CC module/bdev/ftl/bdev_ftl.o 00:04:44.926 CC module/bdev/iscsi/bdev_iscsi.o 00:04:44.926 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:44.926 LIB libspdk_bdev_split.a 00:04:44.926 LIB libspdk_bdev_zone_block.a 00:04:44.926 CC module/bdev/raid/bdev_raid_sb.o 00:04:44.926 CC module/bdev/raid/raid0.o 00:04:44.926 CC module/bdev/raid/raid1.o 00:04:44.926 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:45.184 LIB libspdk_bdev_aio.a 00:04:45.184 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:45.184 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:45.184 CC module/bdev/raid/concat.o 00:04:45.184 CC module/bdev/raid/raid5f.o 00:04:45.184 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:45.184 LIB libspdk_bdev_ftl.a 00:04:45.442 LIB libspdk_bdev_iscsi.a 00:04:45.442 LIB libspdk_bdev_virtio.a 00:04:45.700 LIB libspdk_bdev_raid.a 00:04:46.635 LIB libspdk_bdev_nvme.a 00:04:46.893 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:46.893 CC module/event/subsystems/vmd/vmd.o 00:04:46.893 CC 
module/event/subsystems/iobuf/iobuf.o 00:04:46.893 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:46.893 CC module/event/subsystems/sock/sock.o 00:04:46.893 CC module/event/subsystems/scheduler/scheduler.o 00:04:46.893 CC module/event/subsystems/keyring/keyring.o 00:04:46.893 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:46.893 LIB libspdk_event_vhost_blk.a 00:04:46.893 LIB libspdk_event_sock.a 00:04:46.893 LIB libspdk_event_keyring.a 00:04:46.893 LIB libspdk_event_vmd.a 00:04:46.893 LIB libspdk_event_iobuf.a 00:04:46.893 LIB libspdk_event_scheduler.a 00:04:47.151 CC module/event/subsystems/accel/accel.o 00:04:47.410 LIB libspdk_event_accel.a 00:04:47.410 CC module/event/subsystems/bdev/bdev.o 00:04:47.668 LIB libspdk_event_bdev.a 00:04:47.926 CC module/event/subsystems/nbd/nbd.o 00:04:47.926 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:47.926 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:47.926 CC module/event/subsystems/scsi/scsi.o 00:04:47.926 LIB libspdk_event_nbd.a 00:04:47.926 LIB libspdk_event_scsi.a 00:04:48.184 LIB libspdk_event_nvmf.a 00:04:48.184 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:48.184 CC module/event/subsystems/iscsi/iscsi.o 00:04:48.442 LIB libspdk_event_vhost_scsi.a 00:04:48.442 LIB libspdk_event_iscsi.a 00:04:48.442 CC app/trace_record/trace_record.o 00:04:48.442 CXX app/trace/trace.o 00:04:48.442 CC app/spdk_lspci/spdk_lspci.o 00:04:48.700 CC app/iscsi_tgt/iscsi_tgt.o 00:04:48.700 CC app/nvmf_tgt/nvmf_main.o 00:04:48.700 CC app/spdk_tgt/spdk_tgt.o 00:04:48.700 CC examples/accel/perf/accel_perf.o 00:04:48.700 CC test/bdev/bdevio/bdevio.o 00:04:48.700 CC test/accel/dif/dif.o 00:04:48.700 CC test/app/bdev_svc/bdev_svc.o 00:04:48.700 LINK spdk_lspci 00:04:48.700 LINK spdk_trace_record 00:04:48.958 LINK nvmf_tgt 00:04:48.958 LINK iscsi_tgt 00:04:48.958 LINK spdk_tgt 00:04:48.958 LINK bdev_svc 00:04:49.216 LINK spdk_trace 00:04:49.216 LINK bdevio 00:04:49.216 LINK dif 00:04:49.216 LINK accel_perf 00:04:49.474 CC app/spdk_nvme_perf/perf.o 00:04:49.474 CC test/blobfs/mkfs/mkfs.o 00:04:49.746 LINK mkfs 00:04:49.746 TEST_HEADER include/spdk/ioat.h 00:04:49.746 TEST_HEADER include/spdk/blobfs.h 00:04:50.035 TEST_HEADER include/spdk/notify.h 00:04:50.035 TEST_HEADER include/spdk/pipe.h 00:04:50.035 TEST_HEADER include/spdk/accel.h 00:04:50.035 TEST_HEADER include/spdk/file.h 00:04:50.035 TEST_HEADER include/spdk/version.h 00:04:50.035 TEST_HEADER include/spdk/trace_parser.h 00:04:50.035 TEST_HEADER include/spdk/opal_spec.h 00:04:50.035 TEST_HEADER include/spdk/uuid.h 00:04:50.035 TEST_HEADER include/spdk/likely.h 00:04:50.035 TEST_HEADER include/spdk/dif.h 00:04:50.035 TEST_HEADER include/spdk/keyring_module.h 00:04:50.035 TEST_HEADER include/spdk/memory.h 00:04:50.035 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:50.035 TEST_HEADER include/spdk/dma.h 00:04:50.035 TEST_HEADER include/spdk/nbd.h 00:04:50.035 TEST_HEADER include/spdk/conf.h 00:04:50.035 TEST_HEADER include/spdk/env_dpdk.h 00:04:50.035 TEST_HEADER include/spdk/nvmf_spec.h 00:04:50.035 TEST_HEADER include/spdk/iscsi_spec.h 00:04:50.035 TEST_HEADER include/spdk/mmio.h 00:04:50.035 TEST_HEADER include/spdk/json.h 00:04:50.035 TEST_HEADER include/spdk/opal.h 00:04:50.035 TEST_HEADER include/spdk/bdev.h 00:04:50.035 TEST_HEADER include/spdk/keyring.h 00:04:50.035 TEST_HEADER include/spdk/base64.h 00:04:50.035 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:50.035 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:50.035 TEST_HEADER include/spdk/fd.h 00:04:50.035 TEST_HEADER 
include/spdk/barrier.h 00:04:50.035 TEST_HEADER include/spdk/scsi_spec.h 00:04:50.035 TEST_HEADER include/spdk/zipf.h 00:04:50.035 TEST_HEADER include/spdk/nvmf.h 00:04:50.035 TEST_HEADER include/spdk/queue.h 00:04:50.035 TEST_HEADER include/spdk/xor.h 00:04:50.035 TEST_HEADER include/spdk/cpuset.h 00:04:50.035 TEST_HEADER include/spdk/thread.h 00:04:50.035 TEST_HEADER include/spdk/bdev_zone.h 00:04:50.035 TEST_HEADER include/spdk/fd_group.h 00:04:50.035 TEST_HEADER include/spdk/tree.h 00:04:50.035 TEST_HEADER include/spdk/blob_bdev.h 00:04:50.035 TEST_HEADER include/spdk/crc64.h 00:04:50.035 TEST_HEADER include/spdk/assert.h 00:04:50.035 TEST_HEADER include/spdk/nvme_spec.h 00:04:50.035 TEST_HEADER include/spdk/endian.h 00:04:50.035 TEST_HEADER include/spdk/pci_ids.h 00:04:50.035 TEST_HEADER include/spdk/log.h 00:04:50.036 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:50.036 TEST_HEADER include/spdk/ftl.h 00:04:50.036 TEST_HEADER include/spdk/config.h 00:04:50.036 TEST_HEADER include/spdk/vhost.h 00:04:50.036 TEST_HEADER include/spdk/bdev_module.h 00:04:50.036 TEST_HEADER include/spdk/nvme_intel.h 00:04:50.036 TEST_HEADER include/spdk/idxd_spec.h 00:04:50.036 TEST_HEADER include/spdk/crc16.h 00:04:50.036 TEST_HEADER include/spdk/nvme.h 00:04:50.036 TEST_HEADER include/spdk/stdinc.h 00:04:50.036 TEST_HEADER include/spdk/scsi.h 00:04:50.036 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:50.036 TEST_HEADER include/spdk/idxd.h 00:04:50.036 TEST_HEADER include/spdk/hexlify.h 00:04:50.036 TEST_HEADER include/spdk/reduce.h 00:04:50.036 TEST_HEADER include/spdk/crc32.h 00:04:50.036 TEST_HEADER include/spdk/init.h 00:04:50.036 TEST_HEADER include/spdk/nvmf_transport.h 00:04:50.036 TEST_HEADER include/spdk/nvme_zns.h 00:04:50.036 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:50.036 TEST_HEADER include/spdk/util.h 00:04:50.036 TEST_HEADER include/spdk/jsonrpc.h 00:04:50.036 TEST_HEADER include/spdk/env.h 00:04:50.036 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:50.036 TEST_HEADER include/spdk/lvol.h 00:04:50.036 TEST_HEADER include/spdk/histogram_data.h 00:04:50.036 TEST_HEADER include/spdk/event.h 00:04:50.036 TEST_HEADER include/spdk/trace.h 00:04:50.036 TEST_HEADER include/spdk/ioat_spec.h 00:04:50.036 TEST_HEADER include/spdk/string.h 00:04:50.036 TEST_HEADER include/spdk/ublk.h 00:04:50.036 TEST_HEADER include/spdk/bit_array.h 00:04:50.036 TEST_HEADER include/spdk/scheduler.h 00:04:50.036 TEST_HEADER include/spdk/blob.h 00:04:50.036 TEST_HEADER include/spdk/gpt_spec.h 00:04:50.036 TEST_HEADER include/spdk/sock.h 00:04:50.036 TEST_HEADER include/spdk/vmd.h 00:04:50.036 TEST_HEADER include/spdk/rpc.h 00:04:50.036 TEST_HEADER include/spdk/accel_module.h 00:04:50.036 TEST_HEADER include/spdk/bit_pool.h 00:04:50.036 CXX test/cpp_headers/ioat.o 00:04:50.301 CXX test/cpp_headers/blobfs.o 00:04:50.301 CC examples/bdev/hello_world/hello_bdev.o 00:04:50.301 LINK spdk_nvme_perf 00:04:50.301 CXX test/cpp_headers/notify.o 00:04:50.560 LINK hello_bdev 00:04:50.560 CXX test/cpp_headers/pipe.o 00:04:50.818 CXX test/cpp_headers/accel.o 00:04:51.076 CXX test/cpp_headers/file.o 00:04:51.076 CXX test/cpp_headers/version.o 00:04:51.076 CXX test/cpp_headers/trace_parser.o 00:04:51.334 CXX test/cpp_headers/opal_spec.o 00:04:51.334 CC examples/bdev/bdevperf/bdevperf.o 00:04:51.592 CXX test/cpp_headers/uuid.o 00:04:51.592 CXX test/cpp_headers/likely.o 00:04:51.850 CXX test/cpp_headers/dif.o 00:04:51.850 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:51.850 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:52.109 CC 
test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:52.109 CXX test/cpp_headers/keyring_module.o 00:04:52.109 CC app/spdk_nvme_identify/identify.o 00:04:52.109 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:52.109 CC test/dma/test_dma/test_dma.o 00:04:52.109 CXX test/cpp_headers/memory.o 00:04:52.367 LINK nvme_fuzz 00:04:52.367 LINK bdevperf 00:04:52.367 CXX test/cpp_headers/vfio_user_pci.o 00:04:52.367 CC app/spdk_nvme_discover/discovery_aer.o 00:04:52.626 LINK test_dma 00:04:52.626 CXX test/cpp_headers/dma.o 00:04:52.626 LINK vhost_fuzz 00:04:52.626 LINK spdk_nvme_discover 00:04:52.626 CXX test/cpp_headers/nbd.o 00:04:52.884 CC test/env/mem_callbacks/mem_callbacks.o 00:04:52.884 CXX test/cpp_headers/conf.o 00:04:52.884 LINK spdk_nvme_identify 00:04:52.884 CXX test/cpp_headers/env_dpdk.o 00:04:53.142 CXX test/cpp_headers/nvmf_spec.o 00:04:53.142 LINK mem_callbacks 00:04:53.142 CXX test/cpp_headers/iscsi_spec.o 00:04:53.400 CXX test/cpp_headers/mmio.o 00:04:53.400 CC test/env/vtophys/vtophys.o 00:04:53.658 CC examples/blob/hello_world/hello_blob.o 00:04:53.658 CXX test/cpp_headers/json.o 00:04:53.658 CC examples/blob/cli/blobcli.o 00:04:53.658 LINK vtophys 00:04:53.658 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:53.658 CXX test/cpp_headers/opal.o 00:04:53.917 LINK hello_blob 00:04:53.917 LINK env_dpdk_post_init 00:04:53.917 CC test/env/memory/memory_ut.o 00:04:53.917 CXX test/cpp_headers/bdev.o 00:04:53.917 CC app/spdk_top/spdk_top.o 00:04:53.917 LINK iscsi_fuzz 00:04:54.175 LINK blobcli 00:04:54.175 CXX test/cpp_headers/keyring.o 00:04:54.175 CXX test/cpp_headers/base64.o 00:04:54.434 CXX test/cpp_headers/blobfs_bdev.o 00:04:54.434 CC app/vhost/vhost.o 00:04:54.692 CXX test/cpp_headers/nvme_ocssd.o 00:04:54.692 LINK vhost 00:04:54.692 LINK memory_ut 00:04:54.950 CXX test/cpp_headers/fd.o 00:04:54.950 CXX test/cpp_headers/barrier.o 00:04:54.950 LINK spdk_top 00:04:54.950 CC test/env/pci/pci_ut.o 00:04:55.208 CC test/event/event_perf/event_perf.o 00:04:55.208 CXX test/cpp_headers/scsi_spec.o 00:04:55.208 CC test/app/histogram_perf/histogram_perf.o 00:04:55.208 CC examples/ioat/perf/perf.o 00:04:55.208 CC examples/nvme/hello_world/hello_world.o 00:04:55.208 LINK event_perf 00:04:55.208 CXX test/cpp_headers/zipf.o 00:04:55.208 LINK histogram_perf 00:04:55.775 CXX test/cpp_headers/nvmf.o 00:04:55.775 LINK pci_ut 00:04:55.775 LINK hello_world 00:04:55.775 LINK ioat_perf 00:04:55.775 CC test/lvol/esnap/esnap.o 00:04:55.775 CXX test/cpp_headers/queue.o 00:04:56.034 CC test/app/jsoncat/jsoncat.o 00:04:56.292 CXX test/cpp_headers/xor.o 00:04:56.292 CC test/event/reactor/reactor.o 00:04:56.292 CC examples/nvme/reconnect/reconnect.o 00:04:56.292 LINK jsoncat 00:04:56.292 LINK reactor 00:04:56.292 CC examples/ioat/verify/verify.o 00:04:56.597 CXX test/cpp_headers/cpuset.o 00:04:56.597 CC test/event/reactor_perf/reactor_perf.o 00:04:56.597 LINK reconnect 00:04:56.869 CXX test/cpp_headers/thread.o 00:04:57.129 LINK reactor_perf 00:04:57.129 CC test/app/stub/stub.o 00:04:57.129 LINK verify 00:04:57.129 CXX test/cpp_headers/bdev_zone.o 00:04:57.129 CC test/rpc_client/rpc_client_test.o 00:04:57.129 CC test/nvme/aer/aer.o 00:04:57.388 CC test/thread/poller_perf/poller_perf.o 00:04:57.388 LINK stub 00:04:57.388 LINK rpc_client_test 00:04:57.388 CXX test/cpp_headers/fd_group.o 00:04:57.388 LINK poller_perf 00:04:57.388 LINK aer 00:04:57.646 CXX test/cpp_headers/tree.o 00:04:57.646 CXX test/cpp_headers/blob_bdev.o 00:04:57.646 CXX test/cpp_headers/crc64.o 00:04:57.905 CXX test/cpp_headers/assert.o 
00:04:57.905 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:57.905 CC app/spdk_dd/spdk_dd.o 00:04:57.905 CC test/event/app_repeat/app_repeat.o 00:04:57.905 CC app/fio/nvme/fio_plugin.o 00:04:57.905 CXX test/cpp_headers/nvme_spec.o 00:04:57.905 CC test/event/scheduler/scheduler.o 00:04:58.163 LINK app_repeat 00:04:58.163 CXX test/cpp_headers/endian.o 00:04:58.163 CC test/thread/lock/spdk_lock.o 00:04:58.163 LINK spdk_dd 00:04:58.163 LINK scheduler 00:04:58.421 CXX test/cpp_headers/pci_ids.o 00:04:58.421 LINK nvme_manage 00:04:58.421 CXX test/cpp_headers/log.o 00:04:58.679 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:58.679 LINK spdk_nvme 00:04:58.679 CC examples/sock/hello_world/hello_sock.o 00:04:58.679 CC test/nvme/reset/reset.o 00:04:58.679 CXX test/cpp_headers/ftl.o 00:04:58.937 LINK hello_sock 00:04:58.937 LINK reset 00:04:58.937 CXX test/cpp_headers/config.o 00:04:58.937 CXX test/cpp_headers/vhost.o 00:04:59.195 CXX test/cpp_headers/bdev_module.o 00:04:59.195 CC test/nvme/sgl/sgl.o 00:04:59.453 CXX test/cpp_headers/nvme_intel.o 00:04:59.453 CC examples/nvme/arbitration/arbitration.o 00:04:59.453 LINK sgl 00:04:59.453 CXX test/cpp_headers/idxd_spec.o 00:04:59.711 CXX test/cpp_headers/crc16.o 00:04:59.711 CC app/fio/bdev/fio_plugin.o 00:04:59.711 LINK arbitration 00:04:59.981 CXX test/cpp_headers/nvme.o 00:04:59.981 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:04:59.981 CXX test/cpp_headers/stdinc.o 00:04:59.981 CXX test/cpp_headers/scsi.o 00:04:59.981 LINK histogram_ut 00:05:00.241 LINK spdk_lock 00:05:00.241 CC test/nvme/e2edp/nvme_dp.o 00:05:00.241 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:00.241 LINK spdk_bdev 00:05:00.499 CC test/unit/lib/accel/accel.c/accel_ut.o 00:05:00.499 CXX test/cpp_headers/idxd.o 00:05:00.499 LINK nvme_dp 00:05:00.757 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:05:00.757 CXX test/cpp_headers/hexlify.o 00:05:00.757 CC examples/nvme/hotplug/hotplug.o 00:05:00.757 CXX test/cpp_headers/reduce.o 00:05:01.015 CXX test/cpp_headers/crc32.o 00:05:01.015 LINK hotplug 00:05:01.015 CXX test/cpp_headers/init.o 00:05:01.015 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:05:01.272 CC test/unit/lib/blob/blob.c/blob_ut.o 00:05:01.272 CXX test/cpp_headers/nvmf_transport.o 00:05:01.272 CC test/nvme/overhead/overhead.o 00:05:01.530 CXX test/cpp_headers/nvme_zns.o 00:05:01.530 LINK esnap 00:05:01.530 LINK overhead 00:05:01.788 CXX test/cpp_headers/vfio_user_spec.o 00:05:01.788 LINK blob_bdev_ut 00:05:01.788 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:05:01.788 CXX test/cpp_headers/util.o 00:05:02.047 LINK tree_ut 00:05:02.047 CXX test/cpp_headers/jsonrpc.o 00:05:02.047 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:02.047 CC test/unit/lib/dma/dma.c/dma_ut.o 00:05:02.305 CXX test/cpp_headers/env.o 00:05:02.305 CC test/unit/lib/event/app.c/app_ut.o 00:05:02.305 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:05:02.305 LINK cmb_copy 00:05:02.562 CXX test/cpp_headers/nvmf_cmd.o 00:05:02.562 LINK dma_ut 00:05:02.820 CC test/nvme/err_injection/err_injection.o 00:05:02.820 CXX test/cpp_headers/lvol.o 00:05:03.078 LINK app_ut 00:05:03.078 CXX test/cpp_headers/histogram_data.o 00:05:03.078 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:05:03.078 LINK err_injection 00:05:03.078 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:05:03.336 CXX test/cpp_headers/event.o 00:05:03.336 CXX test/cpp_headers/trace.o 00:05:03.594 LINK accel_ut 00:05:03.594 CXX test/cpp_headers/ioat_spec.o 00:05:03.594 CC examples/nvme/abort/abort.o 00:05:03.594 LINK ioat_ut 00:05:03.852 CC 
examples/nvme/pmr_persistence/pmr_persistence.o 00:05:03.852 LINK blobfs_async_ut 00:05:03.852 CXX test/cpp_headers/string.o 00:05:04.109 CXX test/cpp_headers/ublk.o 00:05:04.109 LINK pmr_persistence 00:05:04.109 CXX test/cpp_headers/bit_array.o 00:05:04.109 LINK abort 00:05:04.109 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:05:04.367 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:05:04.367 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:05:04.367 CXX test/cpp_headers/scheduler.o 00:05:04.367 LINK reactor_ut 00:05:04.367 CC test/nvme/startup/startup.o 00:05:04.625 CXX test/cpp_headers/blob.o 00:05:04.625 LINK startup 00:05:04.625 CXX test/cpp_headers/gpt_spec.o 00:05:04.625 LINK init_grp_ut 00:05:04.883 CXX test/cpp_headers/sock.o 00:05:04.883 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:05:04.883 CXX test/cpp_headers/vmd.o 00:05:05.141 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:05:05.141 CXX test/cpp_headers/rpc.o 00:05:05.141 CC test/unit/lib/iscsi/param.c/param_ut.o 00:05:05.141 CXX test/cpp_headers/accel_module.o 00:05:05.399 CC examples/vmd/lsvmd/lsvmd.o 00:05:05.399 CXX test/cpp_headers/bit_pool.o 00:05:05.399 LINK conn_ut 00:05:05.657 LINK lsvmd 00:05:05.657 CC test/nvme/reserve/reserve.o 00:05:05.657 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:05:05.657 LINK param_ut 00:05:05.657 LINK blobfs_sync_ut 00:05:05.915 CC test/nvme/simple_copy/simple_copy.o 00:05:05.915 LINK reserve 00:05:05.915 CC test/nvme/connect_stress/connect_stress.o 00:05:06.174 LINK simple_copy 00:05:06.174 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:05:06.174 LINK connect_stress 00:05:06.432 LINK portal_grp_ut 00:05:06.432 LINK blobfs_bdev_ut 00:05:06.774 CC test/nvme/boot_partition/boot_partition.o 00:05:06.774 CC test/nvme/compliance/nvme_compliance.o 00:05:06.774 CC examples/vmd/led/led.o 00:05:06.774 LINK boot_partition 00:05:07.033 LINK led 00:05:07.033 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:05:07.033 LINK nvme_compliance 00:05:07.033 LINK bdev_ut 00:05:07.290 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:05:07.290 CC test/nvme/fused_ordering/fused_ordering.o 00:05:07.548 LINK json_parse_ut 00:05:07.548 CC test/unit/lib/bdev/part.c/part_ut.o 00:05:07.548 LINK fused_ordering 00:05:07.806 LINK iscsi_ut 00:05:07.806 LINK jsonrpc_server_ut 00:05:07.806 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:05:07.806 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:07.806 LINK tgt_node_ut 00:05:08.064 LINK doorbell_aers 00:05:08.064 CC test/nvme/fdp/fdp.o 00:05:08.321 CC test/unit/lib/log/log.c/log_ut.o 00:05:08.321 CC examples/nvmf/nvmf/nvmf.o 00:05:08.321 CC test/nvme/cuse/cuse.o 00:05:08.321 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:05:08.580 LINK log_ut 00:05:08.580 LINK json_util_ut 00:05:08.580 LINK fdp 00:05:08.580 LINK nvmf 00:05:08.837 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:05:08.837 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:05:08.837 CC test/unit/lib/notify/notify.c/notify_ut.o 00:05:09.096 LINK notify_ut 00:05:09.096 LINK blob_ut 00:05:09.096 LINK cuse 00:05:09.353 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:05:09.354 CC examples/util/zipf/zipf.o 00:05:09.354 LINK json_write_ut 00:05:09.611 CC examples/thread/thread/thread_ex.o 00:05:09.611 LINK zipf 00:05:09.611 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:05:09.611 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:05:09.868 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:05:09.868 LINK scsi_nvme_ut 00:05:09.868 LINK thread 00:05:10.126 CC 
test/unit/lib/scsi/dev.c/dev_ut.o 00:05:10.126 LINK gpt_ut 00:05:10.126 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:05:10.126 LINK nvme_ut 00:05:10.384 LINK lvol_ut 00:05:10.384 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:05:10.642 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:05:10.642 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:05:10.642 LINK dev_ut 00:05:10.901 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:05:11.467 LINK nvme_ctrlr_ocssd_cmd_ut 00:05:11.467 LINK nvme_ctrlr_cmd_ut 00:05:11.467 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:05:11.467 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:05:11.725 LINK part_ut 00:05:11.725 LINK vbdev_lvol_ut 00:05:11.725 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:05:11.983 LINK scsi_ut 00:05:11.983 LINK lun_ut 00:05:11.983 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:05:11.983 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:05:11.983 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:05:12.242 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:05:12.808 LINK bdev_raid_sb_ut 00:05:12.808 CC examples/idxd/perf/perf.o 00:05:12.808 LINK scsi_bdev_ut 00:05:12.808 LINK nvme_ns_ut 00:05:12.808 LINK nvme_ctrlr_ut 00:05:13.065 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:05:13.065 LINK idxd_perf 00:05:13.065 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:05:13.065 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:13.065 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:05:13.323 LINK interrupt_tgt 00:05:13.581 LINK nvme_ns_ocssd_cmd_ut 00:05:13.581 LINK scsi_pr_ut 00:05:13.909 LINK concat_ut 00:05:13.909 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:05:13.909 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:05:13.909 LINK nvme_ns_cmd_ut 00:05:13.909 LINK bdev_raid_ut 00:05:13.909 LINK tcp_ut 00:05:13.909 CC test/unit/lib/sock/sock.c/sock_ut.o 00:05:13.909 CC test/unit/lib/sock/posix.c/posix_ut.o 00:05:14.166 LINK bdev_zone_ut 00:05:14.166 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:05:14.166 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:05:14.166 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:05:14.423 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:05:14.681 LINK nvme_pcie_ut 00:05:14.681 LINK raid1_ut 00:05:14.681 LINK nvme_poll_group_ut 00:05:14.681 LINK bdev_ut 00:05:14.938 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:05:14.938 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:05:14.938 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:05:14.938 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:05:15.197 LINK vbdev_zone_block_ut 00:05:15.197 LINK posix_ut 00:05:15.197 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:05:15.456 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:05:15.456 LINK nvme_quirks_ut 00:05:15.456 LINK raid5f_ut 00:05:15.456 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:05:15.715 LINK sock_ut 00:05:15.715 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:05:15.973 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:05:15.973 CC test/unit/lib/thread/thread.c/thread_ut.o 00:05:16.232 LINK nvme_qpair_ut 00:05:16.232 LINK nvme_io_msg_ut 00:05:16.232 LINK nvme_transport_ut 00:05:16.490 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:05:16.748 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:05:16.748 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:05:16.748 LINK nvme_fabric_ut 00:05:17.006 CC 
test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:05:17.326 LINK nvme_pcie_common_ut 00:05:17.326 LINK subsystem_ut 00:05:17.326 LINK iobuf_ut 00:05:17.583 CC test/unit/lib/util/base64.c/base64_ut.o 00:05:17.583 LINK nvme_opal_ut 00:05:17.583 LINK ctrlr_ut 00:05:17.583 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:05:17.842 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:05:17.842 LINK nvme_tcp_ut 00:05:17.842 LINK base64_ut 00:05:17.842 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:05:18.101 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:05:18.101 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:05:18.101 CC test/unit/lib/init/rpc.c/rpc_ut.o 00:05:18.398 LINK pci_event_ut 00:05:18.398 LINK bit_array_ut 00:05:18.659 LINK subsystem_ut 00:05:18.659 LINK nvme_cuse_ut 00:05:18.659 LINK thread_ut 00:05:18.659 LINK rpc_ut 00:05:18.659 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:05:18.659 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:05:18.659 LINK ctrlr_bdev_ut 00:05:18.918 LINK nvme_rdma_ut 00:05:18.918 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:05:18.918 CC test/unit/lib/keyring/keyring.c/keyring_ut.o 00:05:18.918 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:05:18.918 LINK cpuset_ut 00:05:19.176 CC test/unit/lib/rdma/common.c/common_ut.o 00:05:19.176 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:05:19.176 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:05:19.176 LINK rpc_ut 00:05:19.176 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:05:19.176 LINK keyring_ut 00:05:19.435 LINK idxd_user_ut 00:05:19.435 LINK crc16_ut 00:05:19.435 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:05:19.435 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:05:19.694 LINK common_ut 00:05:19.694 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:05:19.694 LINK ctrlr_discovery_ut 00:05:19.694 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:05:19.694 LINK ftl_l2p_ut 00:05:19.952 LINK crc32_ieee_ut 00:05:19.952 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:05:19.952 LINK idxd_ut 00:05:19.952 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:05:19.952 LINK crc32c_ut 00:05:20.212 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:05:20.212 CC test/unit/lib/util/dif.c/dif_ut.o 00:05:20.212 LINK crc64_ut 00:05:20.212 CC test/unit/lib/util/iov.c/iov_ut.o 00:05:20.212 LINK bdev_nvme_ut 00:05:20.212 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:05:20.471 LINK nvmf_ut 00:05:20.471 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:05:20.732 LINK iov_ut 00:05:20.732 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:05:20.732 CC test/unit/lib/util/math.c/math_ut.o 00:05:20.732 LINK ftl_bitmap_ut 00:05:20.989 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:05:20.989 LINK math_ut 00:05:20.989 LINK ftl_io_ut 00:05:20.989 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:05:21.247 LINK vhost_ut 00:05:21.247 LINK ftl_mempool_ut 00:05:21.247 CC test/unit/lib/util/string.c/string_ut.o 00:05:21.247 LINK ftl_band_ut 00:05:21.247 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:05:21.504 LINK dif_ut 00:05:21.504 LINK pipe_ut 00:05:21.504 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:05:21.504 CC test/unit/lib/util/xor.c/xor_ut.o 00:05:21.504 LINK string_ut 00:05:21.763 LINK ftl_mngt_ut 00:05:21.763 LINK xor_ut 00:05:23.136 LINK ftl_sb_ut 00:05:23.136 LINK ftl_layout_upgrade_ut 00:05:23.136 LINK transport_ut 00:05:23.395 LINK rdma_ut 00:05:23.961 00:05:23.961 real 1m57.665s 00:05:23.961 user 10m6.543s 00:05:23.961 sys 1m49.598s 00:05:23.961 ************************************ 00:05:23.961 12:49:27 -- 
common/autotest_common.sh@1100 -- $ xtrace_disable 00:05:23.961 12:49:27 -- common/autotest_common.sh@10 -- $ set +x 00:05:23.961 END TEST unittest_build 00:05:23.961 ************************************ 00:05:23.961 12:49:27 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:23.961 12:49:27 -- pm/common@30 -- $ signal_monitor_resources TERM 00:05:23.961 12:49:27 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:05:23.961 12:49:27 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:23.961 12:49:27 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:23.961 12:49:27 -- pm/common@45 -- $ pid=2358 00:05:23.961 12:49:27 -- pm/common@52 -- $ sudo kill -TERM 2358 00:05:23.961 12:49:27 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:23.961 12:49:27 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:23.961 12:49:27 -- pm/common@45 -- $ pid=2359 00:05:23.961 12:49:27 -- pm/common@52 -- $ sudo kill -TERM 2359 00:05:23.961 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:05:23.961 12:49:28 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:23.961 12:49:28 -- nvmf/common.sh@7 -- # uname -s 00:05:23.961 12:49:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:23.961 12:49:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:23.961 12:49:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:23.961 12:49:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:23.961 12:49:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:23.961 12:49:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:23.961 12:49:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:23.961 12:49:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:23.961 12:49:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:23.961 12:49:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:23.962 12:49:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e2ba70dc-0a1b-4676-ac77-8011cf274127 00:05:23.962 12:49:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=e2ba70dc-0a1b-4676-ac77-8011cf274127 00:05:23.962 12:49:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:23.962 12:49:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:23.962 12:49:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:23.962 12:49:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:23.962 12:49:28 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:23.962 12:49:28 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:23.962 12:49:28 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:23.962 12:49:28 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:23.962 12:49:28 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:23.962 12:49:28 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:23.962 
12:49:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:23.962 12:49:28 -- paths/export.sh@5 -- # export PATH 00:05:23.962 12:49:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:23.962 12:49:28 -- nvmf/common.sh@47 -- # : 0 00:05:23.962 12:49:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:23.962 12:49:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:23.962 12:49:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:23.962 12:49:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:23.962 12:49:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:23.962 12:49:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:23.962 12:49:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:23.962 12:49:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:23.962 12:49:28 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:23.962 12:49:28 -- spdk/autotest.sh@32 -- # uname -s 00:05:23.962 12:49:28 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:23.962 12:49:28 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E' 00:05:23.962 12:49:28 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:23.962 12:49:28 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:23.962 12:49:28 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:23.962 12:49:28 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:24.528 12:49:28 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:24.528 12:49:28 -- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm 00:05:24.528 12:49:28 -- spdk/autotest.sh@48 -- # udevadm_pid=98141 00:05:24.528 12:49:28 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:24.528 12:49:28 -- spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property 00:05:24.528 12:49:28 -- pm/common@17 -- # local monitor 00:05:24.528 12:49:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:24.528 12:49:28 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=98143 00:05:24.528 12:49:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:24.528 12:49:28 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=98148 00:05:24.528 12:49:28 -- pm/common@26 -- # sleep 1 00:05:24.528 12:49:28 -- pm/common@21 -- # date +%s 00:05:24.528 12:49:28 -- pm/common@21 -- # date +%s 00:05:24.528 12:49:28 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1713358168 00:05:24.528 12:49:28 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1713358168 00:05:24.528 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:05:24.528 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:05:24.528 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1713358168_collect-vmstat.pm.log 00:05:24.528 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1713358168_collect-cpu-load.pm.log 00:05:25.462 12:49:29 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:25.462 12:49:29 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:25.462 12:49:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:25.462 12:49:29 -- common/autotest_common.sh@10 -- # set +x 00:05:25.462 12:49:29 -- spdk/autotest.sh@59 -- # create_test_list 00:05:25.462 12:49:29 -- common/autotest_common.sh@734 -- # xtrace_disable 00:05:25.462 12:49:29 -- common/autotest_common.sh@10 -- # set +x 00:05:25.719 12:49:29 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:25.719 12:49:29 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:25.719 12:49:29 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:25.719 12:49:29 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:25.719 12:49:29 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:25.719 12:49:29 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:25.719 12:49:29 -- common/autotest_common.sh@1429 -- # uname 00:05:25.719 12:49:29 -- common/autotest_common.sh@1429 -- # '[' Linux = FreeBSD ']' 00:05:25.719 12:49:29 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:25.719 12:49:29 -- common/autotest_common.sh@1449 -- # uname 00:05:25.719 12:49:29 -- common/autotest_common.sh@1449 -- # [[ Linux = FreeBSD ]] 00:05:25.719 12:49:29 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:05:25.719 12:49:29 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:05:25.719 12:49:29 -- spdk/autotest.sh@72 -- # hash lcov 00:05:25.719 12:49:29 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:05:25.719 12:49:29 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:05:25.719 --rc lcov_branch_coverage=1 00:05:25.719 --rc lcov_function_coverage=1 00:05:25.719 --rc genhtml_branch_coverage=1 00:05:25.719 --rc genhtml_function_coverage=1 00:05:25.719 --rc genhtml_legend=1 00:05:25.719 --rc geninfo_all_blocks=1 00:05:25.719 ' 00:05:25.719 12:49:29 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:05:25.719 --rc lcov_branch_coverage=1 00:05:25.719 --rc lcov_function_coverage=1 00:05:25.719 --rc genhtml_branch_coverage=1 00:05:25.719 --rc genhtml_function_coverage=1 00:05:25.719 --rc genhtml_legend=1 00:05:25.719 --rc geninfo_all_blocks=1 00:05:25.719 ' 00:05:25.719 12:49:29 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:05:25.719 --rc lcov_branch_coverage=1 00:05:25.719 --rc lcov_function_coverage=1 00:05:25.719 --rc genhtml_branch_coverage=1 00:05:25.719 --rc genhtml_function_coverage=1 00:05:25.719 --rc genhtml_legend=1 00:05:25.719 --rc geninfo_all_blocks=1 00:05:25.719 --no-external' 00:05:25.719 12:49:29 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:05:25.719 --rc lcov_branch_coverage=1 00:05:25.719 --rc lcov_function_coverage=1 00:05:25.719 --rc genhtml_branch_coverage=1 00:05:25.719 --rc genhtml_function_coverage=1 00:05:25.719 --rc genhtml_legend=1 00:05:25.719 --rc geninfo_all_blocks=1 00:05:25.719 --no-external' 00:05:25.719 12:49:29 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:05:25.719 lcov: LCOV version 
1.15 00:05:25.720 12:49:29 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:27.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:27.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:27.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:27.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:27.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:27.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:27.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:27.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:27.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:27.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:27.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:27.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:27.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:27.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:27.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:27.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:05:27.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:27.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:27.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:27.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:27.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:27.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:27.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:27.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:27.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:27.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:27.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:05:27.619 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:05:27.619 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:27.619 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno
[geninfo output condensed: from 00:05:27.619 to 00:05:28.137 the same warning pair ("<header>.gcno:no functions found" followed by "geninfo: WARNING: GCOV did not produce any data for <header>.gcno") repeats for the test/cpp_headers objects uuid, hexlify, nvmf, log, ublk, nvme_spec, vfio_user_pci, dma, blobfs_bdev, json, pci_ids, bit_array, memory, nbd, crc32, blob_bdev, vhost, histogram_data, bdev_zone, scheduler, bdev, scsi_spec, nvme_zns, stdinc, nvme_ocssd_spec, ftl, config, gpt_spec, rpc, trace, pipe, opal_spec, env, file, ioat_spec, endian, vmd, blobfs, nvme, blob, accel, nvmf_cmd, opal, nvme_intel, string, scsi, mmio, idxd, nvmf_transport, vfio_user_spec, queue, dif, lvol, crc64, base64, version, zipf, bdev_module, env_dpdk, init, jsonrpc, fd_group, event, iscsi_spec, util and keyring]
00:05:28.137 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:28.137 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:28.137 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:28.137 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:28.137 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:28.137 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:05:28.137 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:28.137 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:28.137 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:28.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:28.138 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:28.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:28.138 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:28.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:06:14.847 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:06:14.847 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:06:14.847 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:06:14.847 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:06:14.847 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:06:14.847 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:06:16.222 12:50:19 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:06:16.222 12:50:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:16.222 12:50:19 -- common/autotest_common.sh@10 -- # set +x 00:06:16.222 12:50:19 -- spdk/autotest.sh@91 -- # rm -f 00:06:16.222 12:50:19 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:16.222 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:16.511 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:16.511 12:50:20 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:06:16.511 12:50:20 -- common/autotest_common.sh@1643 -- # zoned_devs=() 00:06:16.511 12:50:20 -- common/autotest_common.sh@1643 -- # local -gA zoned_devs 00:06:16.511 12:50:20 -- common/autotest_common.sh@1644 -- # local nvme bdf 00:06:16.511 12:50:20 -- common/autotest_common.sh@1646 -- # for nvme in /sys/block/nvme* 00:06:16.511 12:50:20 -- common/autotest_common.sh@1647 -- # is_block_zoned nvme0n1 00:06:16.511 12:50:20 -- common/autotest_common.sh@1636 -- # local device=nvme0n1 00:06:16.511 12:50:20 -- common/autotest_common.sh@1638 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:16.511 12:50:20 -- common/autotest_common.sh@1639 -- # [[ none != none ]] 00:06:16.511 12:50:20 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:06:16.511 
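The get_zoned_devs sweep above came back empty, so the (( 0 > 0 )) guard fell through and no namespaces were excluded from the wipe that follows. A minimal standalone sketch of such a scan, assuming only the sysfs layout visible in the trace (an illustration, not the actual autotest_common.sh source):

    #!/bin/bash
    # Sketch: collect zoned block devices the way the traced helper appears to.
    # A device counts as zoned when /sys/block/<name>/queue/zoned exists and
    # does not read "none" (the trace above saw "none": a conventional namespace).
    declare -A zoned_devs
    for nvme in /sys/block/nvme*; do
        [[ -e $nvme ]] || continue            # glob matched nothing
        zoned="$nvme/queue/zoned"
        if [[ -e $zoned && $(<"$zoned") != none ]]; then
            zoned_devs[${nvme##*/}]=1
        fi
    done
    echo "zoned devices found: ${#zoned_devs[@]}"   # 0 in the run above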
12:50:20 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:06:16.511 12:50:20 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:06:16.511 12:50:20 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:06:16.511 12:50:20 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:06:16.512 12:50:20 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:16.512 No valid GPT data, bailing 00:06:16.512 12:50:20 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:16.512 12:50:20 -- scripts/common.sh@391 -- # pt= 00:06:16.512 12:50:20 -- scripts/common.sh@392 -- # return 1 00:06:16.512 12:50:20 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:16.512 1+0 records in 00:06:16.512 1+0 records out 00:06:16.512 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0267919 s, 39.1 MB/s 00:06:16.512 12:50:20 -- spdk/autotest.sh@118 -- # sync 00:06:16.512 12:50:20 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:16.512 12:50:20 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:16.512 12:50:20 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:17.916 12:50:21 -- spdk/autotest.sh@124 -- # uname -s 00:06:17.916 12:50:21 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:06:17.916 12:50:21 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:06:17.916 12:50:21 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:17.916 12:50:21 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:17.916 12:50:21 -- common/autotest_common.sh@10 -- # set +x 00:06:17.916 ************************************ 00:06:17.916 START TEST setup.sh 00:06:17.916 ************************************ 00:06:17.916 12:50:21 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:06:17.916 * Looking for test storage... 00:06:17.916 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:17.916 12:50:21 -- setup/test-setup.sh@10 -- # uname -s 00:06:17.916 12:50:21 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:06:17.916 12:50:21 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:06:17.916 12:50:21 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:17.916 12:50:21 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:17.916 12:50:21 -- common/autotest_common.sh@10 -- # set +x 00:06:17.916 ************************************ 00:06:17.916 START TEST acl 00:06:17.916 ************************************ 00:06:17.916 12:50:21 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:06:17.916 * Looking for test storage... 
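(Stepping back to the pre-cleanup block at spdk/autotest.sh@110-118 above before the setup suite output resumes.) The trace asks spdk-gpt.py and then blkid for a partition table, gets neither, and therefore zeroes the first MiB of the namespace. A hedged sketch of the same idea using plain blkid only (the real check is block_in_use in scripts/common.sh; the device path is the one from this run):

    #!/bin/bash
    # Sketch: wipe a scratch namespace when no partition table is present,
    # mirroring the dd invocation in the log.
    dev=/dev/nvme0n1

    # blkid prints the partition-table type (gpt, dos, ...) or nothing at all;
    # it exits nonzero when it finds nothing, hence the || true.
    pt=$(blkid -s PTTYPE -o value "$dev" || true)

    if [[ -z $pt ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1   # 1 MiB of zeros, as traced
        sync                                      # flush before the next test
    fi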
00:06:17.916 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:17.916 12:50:21 -- setup/acl.sh@10 -- # get_zoned_devs 00:06:17.916 12:50:21 -- common/autotest_common.sh@1643 -- # zoned_devs=() 00:06:17.916 12:50:21 -- common/autotest_common.sh@1643 -- # local -gA zoned_devs 00:06:17.916 12:50:21 -- common/autotest_common.sh@1644 -- # local nvme bdf 00:06:17.916 12:50:21 -- common/autotest_common.sh@1646 -- # for nvme in /sys/block/nvme* 00:06:17.916 12:50:21 -- common/autotest_common.sh@1647 -- # is_block_zoned nvme0n1 00:06:17.916 12:50:21 -- common/autotest_common.sh@1636 -- # local device=nvme0n1 00:06:17.916 12:50:21 -- common/autotest_common.sh@1638 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:17.916 12:50:21 -- common/autotest_common.sh@1639 -- # [[ none != none ]] 00:06:17.916 12:50:21 -- setup/acl.sh@12 -- # devs=() 00:06:17.916 12:50:21 -- setup/acl.sh@12 -- # declare -a devs 00:06:17.916 12:50:21 -- setup/acl.sh@13 -- # drivers=() 00:06:17.916 12:50:21 -- setup/acl.sh@13 -- # declare -A drivers 00:06:17.916 12:50:21 -- setup/acl.sh@51 -- # setup reset 00:06:17.916 12:50:21 -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:17.916 12:50:21 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:18.482 12:50:22 -- setup/acl.sh@52 -- # collect_setup_devs 00:06:18.482 12:50:22 -- setup/acl.sh@16 -- # local dev driver 00:06:18.482 12:50:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:18.482 12:50:22 -- setup/acl.sh@15 -- # setup output status 00:06:18.482 12:50:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:18.482 12:50:22 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:18.740 12:50:22 -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:06:18.740 12:50:22 -- setup/acl.sh@19 -- # continue 00:06:18.740 12:50:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:18.740 Hugepages 00:06:18.740 node hugesize free / total 00:06:18.740 12:50:22 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:06:18.740 12:50:22 -- setup/acl.sh@19 -- # continue 00:06:18.740 12:50:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:18.740 00:06:18.740 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:18.740 12:50:22 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:06:18.740 12:50:22 -- setup/acl.sh@19 -- # continue 00:06:18.740 12:50:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:18.740 12:50:22 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:06:18.740 12:50:22 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:06:18.740 12:50:22 -- setup/acl.sh@20 -- # continue 00:06:18.740 12:50:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:18.999 12:50:22 -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:06:18.999 12:50:22 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:06:18.999 12:50:22 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:06:18.999 12:50:22 -- setup/acl.sh@22 -- # devs+=("$dev") 00:06:18.999 12:50:22 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:06:18.999 12:50:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:18.999 12:50:22 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:06:18.999 12:50:22 -- setup/acl.sh@54 -- # run_test denied denied 00:06:18.999 12:50:22 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:18.999 12:50:22 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:18.999 12:50:22 -- common/autotest_common.sh@10 -- # set +x 00:06:18.999 
************************************ 00:06:18.999 START TEST denied 00:06:18.999 ************************************ 00:06:18.999 12:50:22 -- common/autotest_common.sh@1099 -- # denied 00:06:18.999 12:50:22 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:06:18.999 12:50:22 -- setup/acl.sh@38 -- # setup output config 00:06:18.999 12:50:22 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:06:18.999 12:50:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:18.999 12:50:22 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:20.374 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:06:20.374 12:50:24 -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:06:20.374 12:50:24 -- setup/acl.sh@28 -- # local dev driver 00:06:20.374 12:50:24 -- setup/acl.sh@30 -- # for dev in "$@" 00:06:20.374 12:50:24 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:06:20.374 12:50:24 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:06:20.375 12:50:24 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:06:20.375 12:50:24 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:06:20.375 12:50:24 -- setup/acl.sh@41 -- # setup reset 00:06:20.375 12:50:24 -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:20.375 12:50:24 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:20.940 00:06:20.940 real 0m1.833s 00:06:20.940 user 0m0.514s 00:06:20.940 sys 0m1.374s 00:06:20.940 12:50:24 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:06:20.940 12:50:24 -- common/autotest_common.sh@10 -- # set +x 00:06:20.940 ************************************ 00:06:20.940 END TEST denied 00:06:20.940 ************************************ 00:06:20.940 12:50:24 -- setup/acl.sh@55 -- # run_test allowed allowed 00:06:20.940 12:50:24 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:20.940 12:50:24 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:20.940 12:50:24 -- common/autotest_common.sh@10 -- # set +x 00:06:20.940 ************************************ 00:06:20.940 START TEST allowed 00:06:20.940 ************************************ 00:06:20.940 12:50:24 -- common/autotest_common.sh@1099 -- # allowed 00:06:20.940 12:50:24 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:06:20.940 12:50:24 -- setup/acl.sh@45 -- # setup output config 00:06:20.940 12:50:24 -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:06:20.940 12:50:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:20.940 12:50:24 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:22.319 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:22.319 12:50:26 -- setup/acl.sh@47 -- # verify 00:06:22.319 12:50:26 -- setup/acl.sh@28 -- # local dev driver 00:06:22.319 12:50:26 -- setup/acl.sh@48 -- # setup reset 00:06:22.319 12:50:26 -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:22.319 12:50:26 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:22.886 00:06:22.886 real 0m1.939s 00:06:22.886 user 0m0.477s 00:06:22.886 sys 0m1.449s 00:06:22.886 12:50:26 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:06:22.886 ************************************ 00:06:22.887 END TEST allowed 00:06:22.887 ************************************ 00:06:22.887 12:50:26 -- common/autotest_common.sh@10 -- # set +x 00:06:22.887 00:06:22.887 real 0m5.017s 00:06:22.887 user 0m1.712s 00:06:22.887 sys 0m3.386s 
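The denied/allowed pair above drives the PCI block and allow lists understood by scripts/setup.sh: PCI_BLOCKED makes it skip a controller ("Skipping denied controller at ..."), while PCI_ALLOWED restricts it to one controller and rebinds it (nvme -> uio_pci_generic in this run). A sketch of exercising the same interface by hand, using the controller address from this log:

    #!/bin/bash
    # Sketch: deny, reset, then allow a single NVMe controller via setup.sh.
    SETUP=/home/vagrant/spdk_repo/spdk/scripts/setup.sh

    # Block list: setup.sh must leave 0000:00:10.0 on its kernel driver.
    PCI_BLOCKED=' 0000:00:10.0' "$SETUP" config

    "$SETUP" reset    # hand everything back to the kernel drivers

    # Allow list: only 0000:00:10.0 is eligible for userspace rebinding.
    PCI_ALLOWED=0000:00:10.0 "$SETUP" config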
00:06:22.887 12:50:26 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:06:22.887 12:50:26 -- common/autotest_common.sh@10 -- # set +x 00:06:22.887 ************************************ 00:06:22.887 END TEST acl 00:06:22.887 ************************************ 00:06:22.887 12:50:26 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:06:22.887 12:50:26 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:22.887 12:50:26 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:22.887 12:50:26 -- common/autotest_common.sh@10 -- # set +x 00:06:22.887 ************************************ 00:06:22.887 START TEST hugepages 00:06:22.887 ************************************ 00:06:22.887 12:50:26 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:06:22.887 * Looking for test storage... 00:06:23.161 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:23.161 12:50:27 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:06:23.161 12:50:27 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:06:23.161 12:50:27 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:06:23.161 12:50:27 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:06:23.161 12:50:27 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:06:23.161 12:50:27 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:06:23.162 12:50:27 -- setup/common.sh@17 -- # local get=Hugepagesize 00:06:23.162 12:50:27 -- setup/common.sh@18 -- # local node= 00:06:23.162 12:50:27 -- setup/common.sh@19 -- # local var val 00:06:23.162 12:50:27 -- setup/common.sh@20 -- # local mem_f mem 00:06:23.162 12:50:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:23.162 12:50:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:23.162 12:50:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:23.162 12:50:27 -- setup/common.sh@28 -- # mapfile -t mem 00:06:23.162 12:50:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:23.162 12:50:27 -- setup/common.sh@31 -- # IFS=': ' 00:06:23.162 12:50:27 -- setup/common.sh@31 -- # read -r var val _ 00:06:23.162 12:50:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 3034692 kB' 'MemAvailable: 7409896 kB' 'Buffers: 37648 kB' 'Cached: 4462264 kB' 'SwapCached: 0 kB' 'Active: 1213624 kB' 'Inactive: 3412024 kB' 'Active(anon): 134896 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1078728 kB' 'Inactive(file): 3410232 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 736 kB' 'Writeback: 0 kB' 'AnonPages: 144860 kB' 'Mapped: 73964 kB' 'Shmem: 2620 kB' 'KReclaimable: 208108 kB' 'Slab: 298856 kB' 'SReclaimable: 208108 kB' 'SUnreclaim: 90748 kB' 'KernelStack: 4688 kB' 'PageTables: 4060 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4028400 kB' 'Committed_AS: 628732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14308 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB' 00:06:23.162 12:50:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:23.162 12:50:27 -- 
setup/common.sh@32 -- # continue
[xtrace condensed: the identical IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]] / continue sequence at setup/common.sh@31-32 repeats for every remaining /proc/meminfo field from MemFree through ShmemHugePages]
00:06:23.163 12:50:27 --
setup/common.sh@31 -- # read -r var val _ 00:06:23.163 12:50:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:23.163 12:50:27 -- setup/common.sh@32 -- # continue 00:06:23.163 12:50:27 -- setup/common.sh@31 -- # IFS=': ' 00:06:23.163 12:50:27 -- setup/common.sh@31 -- # read -r var val _ 00:06:23.163 12:50:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:23.163 12:50:27 -- setup/common.sh@32 -- # continue 00:06:23.163 12:50:27 -- setup/common.sh@31 -- # IFS=': ' 00:06:23.163 12:50:27 -- setup/common.sh@31 -- # read -r var val _ 00:06:23.163 12:50:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:23.163 12:50:27 -- setup/common.sh@32 -- # continue 00:06:23.163 12:50:27 -- setup/common.sh@31 -- # IFS=': ' 00:06:23.163 12:50:27 -- setup/common.sh@31 -- # read -r var val _ 00:06:23.163 12:50:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:23.163 12:50:27 -- setup/common.sh@32 -- # continue 00:06:23.163 12:50:27 -- setup/common.sh@31 -- # IFS=': ' 00:06:23.163 12:50:27 -- setup/common.sh@31 -- # read -r var val _ 00:06:23.163 12:50:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:23.163 12:50:27 -- setup/common.sh@32 -- # continue 00:06:23.163 12:50:27 -- setup/common.sh@31 -- # IFS=': ' 00:06:23.163 12:50:27 -- setup/common.sh@31 -- # read -r var val _ 00:06:23.163 12:50:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:23.163 12:50:27 -- setup/common.sh@32 -- # continue 00:06:23.163 12:50:27 -- setup/common.sh@31 -- # IFS=': ' 00:06:23.163 12:50:27 -- setup/common.sh@31 -- # read -r var val _ 00:06:23.163 12:50:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:23.163 12:50:27 -- setup/common.sh@32 -- # continue 00:06:23.163 12:50:27 -- setup/common.sh@31 -- # IFS=': ' 00:06:23.163 12:50:27 -- setup/common.sh@31 -- # read -r var val _ 00:06:23.163 12:50:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:23.163 12:50:27 -- setup/common.sh@32 -- # continue 00:06:23.163 12:50:27 -- setup/common.sh@31 -- # IFS=': ' 00:06:23.163 12:50:27 -- setup/common.sh@31 -- # read -r var val _ 00:06:23.163 12:50:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:23.163 12:50:27 -- setup/common.sh@32 -- # continue 00:06:23.163 12:50:27 -- setup/common.sh@31 -- # IFS=': ' 00:06:23.163 12:50:27 -- setup/common.sh@31 -- # read -r var val _ 00:06:23.163 12:50:27 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:23.163 12:50:27 -- setup/common.sh@33 -- # echo 2048 00:06:23.163 12:50:27 -- setup/common.sh@33 -- # return 0 00:06:23.163 12:50:27 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:06:23.163 12:50:27 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:06:23.163 12:50:27 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:06:23.163 12:50:27 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:06:23.163 12:50:27 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:06:23.163 12:50:27 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:06:23.163 12:50:27 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:06:23.163 12:50:27 -- setup/hugepages.sh@207 -- # get_nodes 00:06:23.163 12:50:27 -- setup/hugepages.sh@27 -- # local node 00:06:23.163 12:50:27 -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:06:23.163 12:50:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:06:23.163 12:50:27 -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:23.163 12:50:27 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:23.163 12:50:27 -- setup/hugepages.sh@208 -- # clear_hp 00:06:23.163 12:50:27 -- setup/hugepages.sh@37 -- # local node hp 00:06:23.163 12:50:27 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:06:23.163 12:50:27 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:23.163 12:50:27 -- setup/hugepages.sh@41 -- # echo 0 00:06:23.163 12:50:27 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:23.163 12:50:27 -- setup/hugepages.sh@41 -- # echo 0 00:06:23.163 12:50:27 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:06:23.163 12:50:27 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:06:23.163 12:50:27 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:06:23.163 12:50:27 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:23.163 12:50:27 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:23.163 12:50:27 -- common/autotest_common.sh@10 -- # set +x 00:06:23.163 ************************************ 00:06:23.163 START TEST default_setup 00:06:23.163 ************************************ 00:06:23.163 12:50:27 -- common/autotest_common.sh@1099 -- # default_setup 00:06:23.163 12:50:27 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:06:23.163 12:50:27 -- setup/hugepages.sh@49 -- # local size=2097152 00:06:23.163 12:50:27 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:06:23.163 12:50:27 -- setup/hugepages.sh@51 -- # shift 00:06:23.163 12:50:27 -- setup/hugepages.sh@52 -- # node_ids=("$@") 00:06:23.163 12:50:27 -- setup/hugepages.sh@52 -- # local node_ids 00:06:23.163 12:50:27 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:23.163 12:50:27 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:06:23.163 12:50:27 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:06:23.163 12:50:27 -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:06:23.163 12:50:27 -- setup/hugepages.sh@62 -- # local user_nodes 00:06:23.163 12:50:27 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:23.163 12:50:27 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:23.163 12:50:27 -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:23.163 12:50:27 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:23.163 12:50:27 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:06:23.163 12:50:27 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:06:23.163 12:50:27 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:06:23.163 12:50:27 -- setup/hugepages.sh@73 -- # return 0 00:06:23.163 12:50:27 -- setup/hugepages.sh@137 -- # setup output 00:06:23.163 12:50:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:23.163 12:50:27 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:23.421 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:23.679 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:24.249 12:50:28 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:06:24.249 12:50:28 -- setup/hugepages.sh@89 -- # local node 00:06:24.249 12:50:28 -- setup/hugepages.sh@90 -- # local sorted_t 00:06:24.249 12:50:28 -- setup/hugepages.sh@91 -- # local sorted_s 00:06:24.249 12:50:28 
-- setup/hugepages.sh@92 -- # local surp 00:06:24.249 12:50:28 -- setup/hugepages.sh@93 -- # local resv 00:06:24.249 12:50:28 -- setup/hugepages.sh@94 -- # local anon 00:06:24.249 12:50:28 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:24.249 12:50:28 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:24.249 12:50:28 -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:24.249 12:50:28 -- setup/common.sh@18 -- # local node= 00:06:24.249 12:50:28 -- setup/common.sh@19 -- # local var val 00:06:24.249 12:50:28 -- setup/common.sh@20 -- # local mem_f mem 00:06:24.249 12:50:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:24.249 12:50:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:24.249 12:50:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:24.249 12:50:28 -- setup/common.sh@28 -- # mapfile -t mem 00:06:24.249 12:50:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:24.249 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.249 12:50:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5130388 kB' 'MemAvailable: 9505772 kB' 'Buffers: 37648 kB' 'Cached: 4462260 kB' 'SwapCached: 0 kB' 'Active: 1220616 kB' 'Inactive: 3412220 kB' 'Active(anon): 141964 kB' 'Inactive(anon): 1784 kB' 'Active(file): 1078652 kB' 'Inactive(file): 3410436 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 788 kB' 'Writeback: 0 kB' 'AnonPages: 150824 kB' 'Mapped: 73600 kB' 'Shmem: 2616 kB' 'KReclaimable: 208160 kB' 'Slab: 298768 kB' 'SReclaimable: 208160 kB' 'SUnreclaim: 90608 kB' 'KernelStack: 4520 kB' 'PageTables: 3720 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 642944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14324 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB' 00:06:24.249 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.249 12:50:28 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:24.249 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.249 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.249 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.249 12:50:28 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:24.249 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.249 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.249 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.249 12:50:28 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:24.249 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.249 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.249 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.249 12:50:28 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:24.249 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.249 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.249 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.249 12:50:28 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:06:24.249 12:50:28 -- setup/common.sh@32 -- # continue
[xtrace condensed: the same test-and-continue sequence repeats for every /proc/meminfo field from SwapCached through WritebackTmp, this time matched against AnonHugePages]
00:06:24.250 12:50:28 --
setup/common.sh@31 -- # read -r var val _ 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:24.250 12:50:28 -- setup/common.sh@33 -- # echo 0 00:06:24.250 12:50:28 -- setup/common.sh@33 -- # return 0 00:06:24.250 12:50:28 -- setup/hugepages.sh@97 -- # anon=0 00:06:24.250 12:50:28 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:24.250 12:50:28 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:24.250 12:50:28 -- setup/common.sh@18 -- # local node= 00:06:24.250 12:50:28 -- setup/common.sh@19 -- # local var val 00:06:24.250 12:50:28 -- setup/common.sh@20 -- # local mem_f mem 00:06:24.250 12:50:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:24.250 12:50:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:24.250 12:50:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:24.250 12:50:28 -- setup/common.sh@28 -- # mapfile -t mem 00:06:24.250 12:50:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.250 12:50:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5129992 kB' 'MemAvailable: 9505456 kB' 'Buffers: 37648 kB' 'Cached: 4462276 kB' 'SwapCached: 0 kB' 'Active: 1221476 kB' 'Inactive: 3412220 kB' 'Active(anon): 142824 kB' 'Inactive(anon): 1780 kB' 'Active(file): 1078652 kB' 'Inactive(file): 3410440 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 
1012 kB' 'Writeback: 0 kB' 'AnonPages: 151720 kB' 'Mapped: 73656 kB' 'Shmem: 2616 kB' 'KReclaimable: 208236 kB' 'Slab: 299072 kB' 'SReclaimable: 208236 kB' 'SUnreclaim: 90836 kB' 'KernelStack: 4456 kB' 'PageTables: 3652 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 647844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14340 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB' 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.250 
12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.250 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.250 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # 
continue 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.251 12:50:28 -- 
setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.251 12:50:28 -- setup/common.sh@33 -- # echo 0 00:06:24.251 12:50:28 
-- setup/common.sh@33 -- # return 0 00:06:24.251 12:50:28 -- setup/hugepages.sh@99 -- # surp=0 00:06:24.251 12:50:28 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:24.251 12:50:28 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:24.251 12:50:28 -- setup/common.sh@18 -- # local node= 00:06:24.251 12:50:28 -- setup/common.sh@19 -- # local var val 00:06:24.251 12:50:28 -- setup/common.sh@20 -- # local mem_f mem 00:06:24.251 12:50:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:24.251 12:50:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:24.251 12:50:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:24.251 12:50:28 -- setup/common.sh@28 -- # mapfile -t mem 00:06:24.251 12:50:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.251 12:50:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5129896 kB' 'MemAvailable: 9505432 kB' 'Buffers: 37648 kB' 'Cached: 4462288 kB' 'SwapCached: 0 kB' 'Active: 1221548 kB' 'Inactive: 3412244 kB' 'Active(anon): 142896 kB' 'Inactive(anon): 1780 kB' 'Active(file): 1078652 kB' 'Inactive(file): 3410464 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 1012 kB' 'Writeback: 0 kB' 'AnonPages: 151892 kB' 'Mapped: 73636 kB' 'Shmem: 2616 kB' 'KReclaimable: 208284 kB' 'Slab: 299296 kB' 'SReclaimable: 208284 kB' 'SUnreclaim: 91012 kB' 'KernelStack: 4504 kB' 'PageTables: 3728 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 647844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14356 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB' 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.251 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.251 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.252 12:50:28 -- setup/common.sh@31 -- 
# read -r var val _ 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # continue 
00:06:24.252 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 
00:06:24.252 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.252 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.252 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:24.253 12:50:28 -- setup/common.sh@33 -- # echo 0 00:06:24.253 12:50:28 -- setup/common.sh@33 -- # return 0 00:06:24.253 nr_hugepages=1024 00:06:24.253 resv_hugepages=0 00:06:24.253 surplus_hugepages=0 00:06:24.253 anon_hugepages=0 00:06:24.253 12:50:28 -- setup/hugepages.sh@100 -- # resv=0 00:06:24.253 12:50:28 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:24.253 12:50:28 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:24.253 12:50:28 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:24.253 12:50:28 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:24.253 12:50:28 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:24.253 12:50:28 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:24.253 12:50:28 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:24.253 12:50:28 -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:24.253 12:50:28 -- setup/common.sh@18 -- # local node= 00:06:24.253 12:50:28 -- setup/common.sh@19 -- # local var val 00:06:24.253 12:50:28 -- setup/common.sh@20 -- # local mem_f mem 00:06:24.253 12:50:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:24.253 12:50:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:24.253 12:50:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:24.253 12:50:28 -- setup/common.sh@28 -- # mapfile -t mem 00:06:24.253 12:50:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.253 12:50:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5129952 kB' 'MemAvailable: 9505536 kB' 'Buffers: 37648 kB' 'Cached: 4462288 kB' 'SwapCached: 0 kB' 'Active: 1221708 kB' 'Inactive: 3412252 kB' 'Active(anon): 143056 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1078652 kB' 'Inactive(file): 3410464 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 1016 kB' 'Writeback: 0 kB' 'AnonPages: 151920 kB' 'Mapped: 73588 kB' 'Shmem: 2616 kB' 'KReclaimable: 208332 kB' 'Slab: 299400 kB' 'SReclaimable: 208332 kB' 'SUnreclaim: 91068 kB' 'KernelStack: 4636 kB' 'PageTables: 3804 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 652488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14388 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB' 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:06:24.253 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:24.253 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.253 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # read -r var val 
_ 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:24.254 12:50:28 -- setup/common.sh@32 
-- # continue 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:24.254 12:50:28 -- setup/common.sh@33 -- # echo 1024 00:06:24.254 12:50:28 -- setup/common.sh@33 -- # return 0 00:06:24.254 12:50:28 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:24.254 12:50:28 -- setup/hugepages.sh@112 -- # get_nodes 00:06:24.254 12:50:28 -- setup/hugepages.sh@27 -- # local node 00:06:24.254 12:50:28 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:24.254 12:50:28 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:24.254 12:50:28 -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:24.254 12:50:28 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:24.254 12:50:28 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:24.254 12:50:28 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:24.254 12:50:28 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:24.254 12:50:28 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:24.254 12:50:28 -- setup/common.sh@18 -- # local node=0 00:06:24.254 12:50:28 -- setup/common.sh@19 -- # local var val 00:06:24.254 12:50:28 -- setup/common.sh@20 -- # local mem_f mem 00:06:24.254 12:50:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:24.254 12:50:28 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node0/meminfo ]] 00:06:24.254 12:50:28 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:24.254 12:50:28 -- setup/common.sh@28 -- # mapfile -t mem 00:06:24.254 12:50:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.254 12:50:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5129464 kB' 'MemUsed: 7121640 kB' 'Active: 1221700 kB' 'Inactive: 3412252 kB' 'Active(anon): 143048 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1078652 kB' 'Inactive(file): 3410464 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'Dirty: 1016 kB' 'Writeback: 0 kB' 'FilePages: 4499936 kB' 'Mapped: 73588 kB' 'AnonPages: 152432 kB' 'Shmem: 2616 kB' 'KernelStack: 4636 kB' 'PageTables: 3884 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 208316 kB' 'Slab: 299416 kB' 'SReclaimable: 208316 kB' 'SUnreclaim: 91100 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.254 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.254 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # read 
-r var val _ 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.255 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.255 12:50:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:24.255 12:50:28 -- 
setup/common.sh@33 -- # echo 0 00:06:24.255 12:50:28 -- setup/common.sh@33 -- # return 0 00:06:24.255 node0=1024 expecting 1024 00:06:24.255 ************************************ 00:06:24.255 END TEST default_setup 00:06:24.255 ************************************ 00:06:24.255 12:50:28 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:24.255 12:50:28 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:24.255 12:50:28 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:24.255 12:50:28 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:24.255 12:50:28 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:24.255 12:50:28 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:24.255 00:06:24.255 real 0m1.123s 00:06:24.255 user 0m0.304s 00:06:24.255 sys 0m0.756s 00:06:24.255 12:50:28 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:06:24.255 12:50:28 -- common/autotest_common.sh@10 -- # set +x 00:06:24.255 12:50:28 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:06:24.255 12:50:28 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:24.255 12:50:28 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:24.255 12:50:28 -- common/autotest_common.sh@10 -- # set +x 00:06:24.255 ************************************ 00:06:24.255 START TEST per_node_1G_alloc 00:06:24.255 ************************************ 00:06:24.255 12:50:28 -- common/autotest_common.sh@1099 -- # per_node_1G_alloc 00:06:24.255 12:50:28 -- setup/hugepages.sh@143 -- # local IFS=, 00:06:24.255 12:50:28 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:06:24.255 12:50:28 -- setup/hugepages.sh@49 -- # local size=1048576 00:06:24.255 12:50:28 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:06:24.255 12:50:28 -- setup/hugepages.sh@51 -- # shift 00:06:24.255 12:50:28 -- setup/hugepages.sh@52 -- # node_ids=("$@") 00:06:24.255 12:50:28 -- setup/hugepages.sh@52 -- # local node_ids 00:06:24.255 12:50:28 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:24.255 12:50:28 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:06:24.255 12:50:28 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:06:24.255 12:50:28 -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:06:24.255 12:50:28 -- setup/hugepages.sh@62 -- # local user_nodes 00:06:24.255 12:50:28 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:06:24.255 12:50:28 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:24.255 12:50:28 -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:24.255 12:50:28 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:24.255 12:50:28 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:06:24.255 12:50:28 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:06:24.255 12:50:28 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:06:24.255 12:50:28 -- setup/hugepages.sh@73 -- # return 0 00:06:24.255 12:50:28 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:06:24.255 12:50:28 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:06:24.255 12:50:28 -- setup/hugepages.sh@146 -- # setup output 00:06:24.255 12:50:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:24.255 12:50:28 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:24.514 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:24.514 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:24.775 12:50:28 -- 
setup/hugepages.sh@147 -- # nr_hugepages=512 00:06:24.775 12:50:28 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:06:24.775 12:50:28 -- setup/hugepages.sh@89 -- # local node 00:06:24.775 12:50:28 -- setup/hugepages.sh@90 -- # local sorted_t 00:06:24.775 12:50:28 -- setup/hugepages.sh@91 -- # local sorted_s 00:06:24.775 12:50:28 -- setup/hugepages.sh@92 -- # local surp 00:06:24.775 12:50:28 -- setup/hugepages.sh@93 -- # local resv 00:06:24.775 12:50:28 -- setup/hugepages.sh@94 -- # local anon 00:06:24.775 12:50:28 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:24.775 12:50:28 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:24.775 12:50:28 -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:24.775 12:50:28 -- setup/common.sh@18 -- # local node= 00:06:24.775 12:50:28 -- setup/common.sh@19 -- # local var val 00:06:24.775 12:50:28 -- setup/common.sh@20 -- # local mem_f mem 00:06:24.775 12:50:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:24.775 12:50:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:24.775 12:50:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:24.775 12:50:28 -- setup/common.sh@28 -- # mapfile -t mem 00:06:24.775 12:50:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:24.775 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.775 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.776 12:50:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 6175336 kB' 'MemAvailable: 10550828 kB' 'Buffers: 37648 kB' 'Cached: 4462288 kB' 'SwapCached: 0 kB' 'Active: 1221632 kB' 'Inactive: 3412248 kB' 'Active(anon): 142972 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1078660 kB' 'Inactive(file): 3410460 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 1072 kB' 'Writeback: 0 kB' 'AnonPages: 152372 kB' 'Mapped: 73756 kB' 'Shmem: 2616 kB' 'KReclaimable: 208236 kB' 'Slab: 299280 kB' 'SReclaimable: 208236 kB' 'SUnreclaim: 91044 kB' 'KernelStack: 4680 kB' 'PageTables: 4444 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601264 kB' 'Committed_AS: 644196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14404 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB' 00:06:24.776 12:50:28 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:24.776 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.776 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.776 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.776 12:50:28 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:24.776 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.776 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.776 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.776 12:50:28 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:24.776 12:50:28 -- setup/common.sh@32 -- # continue 00:06:24.776 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:24.776 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:24.776 12:50:28 -- 
00:06:24.775 12:50:28 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:24.775 12:50:28 -- setup/common.sh@17 -- # local get=AnonHugePages
00:06:24.775 12:50:28 -- setup/common.sh@18 -- # local node=
00:06:24.775 12:50:28 -- setup/common.sh@19 -- # local var val
00:06:24.775 12:50:28 -- setup/common.sh@20 -- # local mem_f mem
00:06:24.775 12:50:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:24.775 12:50:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:24.775 12:50:28 -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:24.775 12:50:28 -- setup/common.sh@28 -- # mapfile -t mem
00:06:24.775 12:50:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:24.775 12:50:28 -- setup/common.sh@31 -- # IFS=': '
00:06:24.775 12:50:28 -- setup/common.sh@31 -- # read -r var val _
00:06:24.776 12:50:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 6175336 kB' 'MemAvailable: 10550828 kB' 'Buffers: 37648 kB' 'Cached: 4462288 kB' 'SwapCached: 0 kB' 'Active: 1221632 kB' 'Inactive: 3412248 kB' 'Active(anon): 142972 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1078660 kB' 'Inactive(file): 3410460 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 1072 kB' 'Writeback: 0 kB' 'AnonPages: 152372 kB' 'Mapped: 73756 kB' 'Shmem: 2616 kB' 'KReclaimable: 208236 kB' 'Slab: 299280 kB' 'SReclaimable: 208236 kB' 'SUnreclaim: 91044 kB' 'KernelStack: 4680 kB' 'PageTables: 4444 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601264 kB' 'Committed_AS: 644196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14404 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
00:06:24.776 12:50:28 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:24.776 12:50:28 -- setup/common.sh@32 -- # continue
[... read/compare/continue xtrace elided for the remaining /proc/meminfo keys until AnonHugePages matches ...]
00:06:24.777 12:50:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:24.777 12:50:28 -- setup/common.sh@33 -- # echo 0
00:06:24.777 12:50:28 -- setup/common.sh@33 -- # return 0
00:06:24.777 12:50:28 -- setup/hugepages.sh@97 -- # anon=0
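The long compare-and-continue run just traced is get_meminfo scanning the meminfo file key by key. A self-contained sketch of that pattern, assuming extglob for the Node-prefix strip (names mirror the xtrace; this is a sketch, not the verbatim setup/common.sh):

    shopt -s extglob

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo
        # with a node id, read that node's own counters from sysfs instead
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop any "Node 0 " prefix
        while IFS=': ' read -r var val _; do
            # every non-matching key falls through to the next read:
            # the compare/continue pairs seen in the trace above
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Total   # -> 512 against the snapshot above

The linear scan is O(keys) per call, which is why each lookup produces roughly fifty compare/continue entries in the xtrace.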
00:06:24.777 12:50:28 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:24.777 12:50:28 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:24.777 12:50:28 -- setup/common.sh@18 -- # local node=
00:06:24.777 12:50:28 -- setup/common.sh@19 -- # local var val
00:06:24.777 12:50:28 -- setup/common.sh@20 -- # local mem_f mem
00:06:24.777 12:50:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:24.777 12:50:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:24.777 12:50:28 -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:24.777 12:50:28 -- setup/common.sh@28 -- # mapfile -t mem
00:06:24.777 12:50:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:24.777 12:50:28 -- setup/common.sh@31 -- # IFS=': '
00:06:24.777 12:50:28 -- setup/common.sh@31 -- # read -r var val _
00:06:24.777 12:50:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 6175596 kB' 'MemAvailable: 10551088 kB' 'Buffers: 37648 kB' 'Cached: 4462288 kB' 'SwapCached: 0 kB' 'Active: 1221892 kB' 'Inactive: 3412248 kB' 'Active(anon): 143232 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1078660 kB' 'Inactive(file): 3410460 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 1072 kB' 'Writeback: 0 kB' 'AnonPages: 152504 kB' 'Mapped: 73756 kB' 'Shmem: 2616 kB' 'KReclaimable: 208236 kB' 'Slab: 299280 kB' 'SReclaimable: 208236 kB' 'SUnreclaim: 91044 kB' 'KernelStack: 4612 kB' 'PageTables: 4056 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601264 kB' 'Committed_AS: 644196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14404 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
00:06:24.777 12:50:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:24.777 12:50:28 -- setup/common.sh@32 -- # continue
[... read/compare/continue xtrace elided for the remaining /proc/meminfo keys until HugePages_Surp matches ...]
00:06:24.778 12:50:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:24.778 12:50:28 -- setup/common.sh@33 -- # echo 0
00:06:24.778 12:50:28 -- setup/common.sh@33 -- # return 0
00:06:24.778 12:50:28 -- setup/hugepages.sh@99 -- # surp=0
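HugePages_Surp (surplus) and HugePages_Rsvd (reserved), both 0 in the snapshots, are the two counters verify_nr_hugepages folds into its bookkeeping below. For background, surplus pages can only exist when hugepage overcommit is enabled through the standard sysctl knob; a read-only sketch of where those numbers come from:

    cat /proc/sys/vm/nr_hugepages              # static pool size: 512 here
    cat /proc/sys/vm/nr_overcommit_hugepages   # surplus ceiling: 0 unless raised
    grep -E 'HugePages_(Surp|Rsvd)' /proc/meminfo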
00:06:24.778 12:50:28 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:24.778 12:50:28 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:06:24.778 12:50:28 -- setup/common.sh@18 -- # local node=
00:06:24.778 12:50:28 -- setup/common.sh@19 -- # local var val
00:06:24.778 12:50:28 -- setup/common.sh@20 -- # local mem_f mem
00:06:24.778 12:50:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:24.778 12:50:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:24.778 12:50:28 -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:24.778 12:50:28 -- setup/common.sh@28 -- # mapfile -t mem
00:06:24.778 12:50:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:24.778 12:50:28 -- setup/common.sh@31 -- # IFS=': '
00:06:24.778 12:50:28 -- setup/common.sh@31 -- # read -r var val _
00:06:24.778 12:50:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 6175560 kB' 'MemAvailable: 10551052 kB' 'Buffers: 37648 kB' 'Cached: 4462288 kB' 'SwapCached: 0 kB' 'Active: 1221828 kB' 'Inactive: 3412248 kB' 'Active(anon): 143168 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1078660 kB' 'Inactive(file): 3410460 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 1072 kB' 'Writeback: 0 kB' 'AnonPages: 152972 kB' 'Mapped: 73536 kB' 'Shmem: 2616 kB' 'KReclaimable: 208236 kB' 'Slab: 299120 kB' 'SReclaimable: 208236 kB' 'SUnreclaim: 90884 kB' 'KernelStack: 4564 kB' 'PageTables: 3624 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601264 kB' 'Committed_AS: 654504 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14420 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
00:06:24.778 12:50:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:24.778 12:50:28 -- setup/common.sh@32 -- # continue
[... read/compare/continue xtrace elided for the remaining /proc/meminfo keys until HugePages_Rsvd matches ...]
00:06:24.779 12:50:28 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:24.779 12:50:28 -- setup/common.sh@33 -- # echo 0
00:06:24.779 12:50:28 -- setup/common.sh@33 -- # return 0
00:06:24.779 nr_hugepages=512
00:06:24.779 resv_hugepages=0
00:06:24.779 surplus_hugepages=0
00:06:24.779 anon_hugepages=0
00:06:24.779 12:50:28 -- setup/hugepages.sh@100 -- # resv=0
00:06:24.779 12:50:28 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:06:24.779 12:50:28 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:06:24.779 12:50:28 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:06:24.779 12:50:28 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:06:24.779 12:50:28 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:06:24.779 12:50:28 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
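The two arithmetic tests just traced are the pool-consistency invariant: the expected count must equal the requested pages plus surplus plus reserved. A sketch with the values the scan returned, using get_meminfo as sketched earlier:

    nr_hugepages=512
    surp=$(get_meminfo HugePages_Surp)    # 0 above
    resv=$(get_meminfo HugePages_Rsvd)    # 0 above
    total=$(get_meminfo HugePages_Total)  # 512, fetched next in the trace
    (( total == nr_hugepages + surp + resv )) ||
        echo "hugepage accounting mismatch: total=$total" >&2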
00:06:24.779 12:50:28 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:06:24.779 12:50:28 -- setup/common.sh@17 -- # local get=HugePages_Total
00:06:24.779 12:50:28 -- setup/common.sh@18 -- # local node=
00:06:24.779 12:50:28 -- setup/common.sh@19 -- # local var val
00:06:24.779 12:50:28 -- setup/common.sh@20 -- # local mem_f mem
00:06:24.779 12:50:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:24.779 12:50:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:24.779 12:50:28 -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:24.779 12:50:28 -- setup/common.sh@28 -- # mapfile -t mem
00:06:24.779 12:50:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:24.779 12:50:28 -- setup/common.sh@31 -- # IFS=': '
00:06:24.779 12:50:28 -- setup/common.sh@31 -- # read -r var val _
00:06:24.780 12:50:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 6175616 kB' 'MemAvailable: 10551108 kB' 'Buffers: 37648 kB' 'Cached: 4462288 kB' 'SwapCached: 0 kB' 'Active: 1221836 kB' 'Inactive: 3412248 kB' 'Active(anon): 143176 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1078660 kB' 'Inactive(file): 3410460 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 1072 kB' 'Writeback: 0 kB' 'AnonPages: 152800 kB' 'Mapped: 73488 kB' 'Shmem: 2616 kB' 'KReclaimable: 208236 kB' 'Slab: 299120 kB' 'SReclaimable: 208236 kB' 'SUnreclaim: 90884 kB' 'KernelStack: 4572 kB' 'PageTables: 3476 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601264 kB' 'Committed_AS: 652524 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14420 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
00:06:24.780 12:50:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:24.780 12:50:28 -- setup/common.sh@32 -- # continue
[... read/compare/continue xtrace elided for the remaining /proc/meminfo keys until HugePages_Total matches ...]
00:06:25.039 12:50:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:25.039 12:50:28 -- setup/common.sh@33 -- # echo 512
00:06:25.039 12:50:28 -- setup/common.sh@33 -- # return 0
00:06:25.039 12:50:28 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:06:25.039 12:50:28 -- setup/hugepages.sh@112 -- # get_nodes
00:06:25.039 12:50:28 -- setup/hugepages.sh@27 -- # local node
00:06:25.039 12:50:28 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:25.039 12:50:28 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:06:25.039 12:50:28 -- setup/hugepages.sh@32 -- # no_nodes=1
00:06:25.039 12:50:28 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:06:25.039 12:50:28 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:25.039 12:50:28 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:06:25.039 12:50:28 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
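The get_nodes loop that starts here repeats the same checks per NUMA node, reading each node's counters from its own sysfs meminfo file rather than the global /proc/meminfo. A sketch of that enumeration (extglob needed for the node glob; get_meminfo as sketched earlier; this single-node VM only exposes node0):

    shopt -s extglob
    declare -a nodes_sys
    for node_dir in /sys/devices/system/node/node+([0-9]); do
        id=${node_dir##*node}
        nodes_sys[id]=$(get_meminfo HugePages_Total "$id")   # 512 on node0 here
    done
    echo "nodes: ${!nodes_sys[*]} -> ${nodes_sys[*]}"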
00:06:25.039 12:50:28 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:25.039 12:50:28 -- setup/common.sh@18 -- # local node=0
00:06:25.039 12:50:28 -- setup/common.sh@19 -- # local var val
00:06:25.039 12:50:28 -- setup/common.sh@20 -- # local mem_f mem
00:06:25.039 12:50:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:25.039 12:50:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:06:25.039 12:50:28 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:06:25.039 12:50:28 -- setup/common.sh@28 -- # mapfile -t mem
00:06:25.039 12:50:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:25.039 12:50:28 -- setup/common.sh@31 -- # IFS=': '
00:06:25.039 12:50:28 -- setup/common.sh@31 -- # read -r var val _
00:06:25.040 12:50:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 6175876 kB' 'MemUsed: 6075228 kB' 'Active: 1221836 kB' 'Inactive: 3412248 kB' 'Active(anon): 143176 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1078660 kB' 'Inactive(file): 3410460 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'Dirty: 1072 kB' 'Writeback: 0 kB' 'FilePages: 4499936 kB' 'Mapped: 73488 kB' 'AnonPages: 152544 kB' 'Shmem: 2616 kB' 'KernelStack: 4640 kB' 'PageTables: 3476 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 208236 kB' 'Slab: 299120 kB' 'SReclaimable: 208236 kB' 'SUnreclaim: 90884 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:06:25.040 12:50:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:25.040 12:50:28 -- setup/common.sh@32 -- # continue
[... read/compare/continue xtrace elided for the intermediate node0 meminfo keys ...]
00:06:25.040 12:50:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:25.040 12:50:28 -- setup/common.sh@32 -- # continue
00:06:25.040 12:50:28 -- setup/common.sh@31 -- # IFS=': '
00:06:25.040 12:50:28 -- setup/common.sh@31 -- # read -r var val _
00:06:25.040 12:50:28 -- setup/common.sh@32 -- # [[ HugePages_Free ==
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:25.040 12:50:28 -- setup/common.sh@32 -- # continue 00:06:25.040 12:50:28 -- setup/common.sh@31 -- # IFS=': ' 00:06:25.040 12:50:28 -- setup/common.sh@31 -- # read -r var val _ 00:06:25.040 12:50:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:25.040 12:50:28 -- setup/common.sh@33 -- # echo 0 00:06:25.040 12:50:28 -- setup/common.sh@33 -- # return 0 00:06:25.040 node0=512 expecting 512 00:06:25.040 ************************************ 00:06:25.040 END TEST per_node_1G_alloc 00:06:25.040 ************************************ 00:06:25.040 12:50:28 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:25.040 12:50:28 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:25.040 12:50:28 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:25.040 12:50:28 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:25.040 12:50:28 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:06:25.040 12:50:28 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:06:25.040 00:06:25.040 real 0m0.634s 00:06:25.040 user 0m0.213s 00:06:25.040 sys 0m0.445s 00:06:25.040 12:50:28 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:06:25.040 12:50:28 -- common/autotest_common.sh@10 -- # set +x 00:06:25.040 12:50:28 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:06:25.040 12:50:28 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:25.040 12:50:28 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:25.040 12:50:28 -- common/autotest_common.sh@10 -- # set +x 00:06:25.040 ************************************ 00:06:25.040 START TEST even_2G_alloc 00:06:25.040 ************************************ 00:06:25.040 12:50:29 -- common/autotest_common.sh@1099 -- # even_2G_alloc 00:06:25.040 12:50:29 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:06:25.040 12:50:29 -- setup/hugepages.sh@49 -- # local size=2097152 00:06:25.040 12:50:29 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:25.040 12:50:29 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:25.040 12:50:29 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:06:25.040 12:50:29 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:25.040 12:50:29 -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:06:25.040 12:50:29 -- setup/hugepages.sh@62 -- # local user_nodes 00:06:25.040 12:50:29 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:25.040 12:50:29 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:25.040 12:50:29 -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:25.040 12:50:29 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:25.040 12:50:29 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:25.040 12:50:29 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:06:25.040 12:50:29 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:25.040 12:50:29 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:06:25.040 12:50:29 -- setup/hugepages.sh@83 -- # : 0 00:06:25.040 12:50:29 -- setup/hugepages.sh@84 -- # : 0 00:06:25.040 12:50:29 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:25.040 12:50:29 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:06:25.040 12:50:29 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:06:25.040 12:50:29 -- setup/hugepages.sh@153 -- # setup output 00:06:25.040 12:50:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:25.040 12:50:29 -- setup/common.sh@10 -- # 
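The get_test_nr_hugepages steps above reduce to one division: the requested size over the kernel's hugepage size. A minimal sketch of that arithmetic, assuming the size argument is in kB (the log's nr_hugepages=1024 for an argument of 2097152 on a 2048 kB hugepage system is consistent with that reading); compute_nr_hugepages is an illustrative name, not the function in setup/hugepages.sh:

#!/usr/bin/env bash
# Sketch: derive a hugepage count for a requested test size.
# Assumption: size_kb is in kB, matching the 2097152 -> 1024 result above.
compute_nr_hugepages() {
    local size_kb=$1
    local hp_kb
    # /proc/meminfo reports the page size in kB, e.g. "Hugepagesize: 2048 kB"
    hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
    (( size_kb >= hp_kb )) || return 1  # request smaller than a single page
    echo $(( size_kb / hp_kb ))
}

compute_nr_hugepages 2097152  # prints 1024 when Hugepagesize is 2048 kB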
00:06:25.040 12:50:29 -- setup/hugepages.sh@153 -- # setup output
00:06:25.040 12:50:29 -- setup/common.sh@9 -- # [[ output == output ]]
00:06:25.040 12:50:29 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:25.298 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:06:25.298 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:06:25.869 12:50:29 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:06:25.869 12:50:29 -- setup/hugepages.sh@89 -- # local node
00:06:25.869 12:50:29 -- setup/hugepages.sh@90 -- # local sorted_t
00:06:25.869 12:50:29 -- setup/hugepages.sh@91 -- # local sorted_s
00:06:25.869 12:50:29 -- setup/hugepages.sh@92 -- # local surp
00:06:25.869 12:50:29 -- setup/hugepages.sh@93 -- # local resv
00:06:25.869 12:50:29 -- setup/hugepages.sh@94 -- # local anon
00:06:25.869 12:50:29 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:25.869 12:50:29 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:25.869 12:50:29 -- setup/common.sh@17 -- # local get=AnonHugePages
00:06:25.869 12:50:29 -- setup/common.sh@18 -- # local node=
00:06:25.869 12:50:29 -- setup/common.sh@19 -- # local var val
00:06:25.869 12:50:29 -- setup/common.sh@20 -- # local mem_f mem
00:06:25.869 12:50:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:25.869 12:50:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:25.869 12:50:29 -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:25.869 12:50:29 -- setup/common.sh@28 -- # mapfile -t mem
00:06:25.869 12:50:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:25.869 12:50:29 -- setup/common.sh@31 -- # IFS=': '
00:06:25.869 12:50:29 -- setup/common.sh@31 -- # read -r var val _
00:06:25.869 12:50:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5126528 kB' 'MemAvailable: 9502024 kB' 'Buffers: 37648 kB' 'Cached: 4462288 kB' 'SwapCached: 0 kB' 'Active: 1221852 kB' 'Inactive: 3412240 kB' 'Active(anon): 143184 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1078668 kB' 'Inactive(file): 3410452 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 1120 kB' 'Writeback: 0 kB' 'AnonPages: 152996 kB' 'Mapped: 73664 kB' 'Shmem: 2616 kB' 'KReclaimable: 208240 kB' 'Slab: 299340 kB' 'SReclaimable: 208240 kB' 'SUnreclaim: 91100 kB' 'KernelStack: 4672 kB' 'PageTables: 3872 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 655588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14388 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
[... xtrace condensed: each key in the snapshot is compared against AnonHugePages and skipped with "continue" until it matches ...]
00:06:25.870 12:50:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:25.870 12:50:29 -- setup/common.sh@33 -- # echo 0
00:06:25.870 12:50:29 -- setup/common.sh@33 -- # return 0
00:06:25.870 12:50:29 -- setup/hugepages.sh@97 -- # anon=0
00:06:25.870 12:50:29 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:25.870 12:50:29 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:25.870 12:50:29 -- setup/common.sh@18 -- # local node=
00:06:25.870 12:50:29 -- setup/common.sh@19 -- # local var val
00:06:25.870 12:50:29 -- setup/common.sh@20 -- # local mem_f mem
00:06:25.870 12:50:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:25.870 12:50:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:25.870 12:50:29 -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:25.870 12:50:29 -- setup/common.sh@28 -- # mapfile -t mem
00:06:25.870 12:50:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
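Every get_meminfo call in this trace follows the same pattern: pick /proc/meminfo or the per-node sysfs file, strip the "Node N " prefix that the per-node file carries on each line, then split every "Key: value" line on ': ' and stop at the requested key. A self-contained sketch reconstructed from the xtrace above (simplified, not the verbatim setup/common.sh):

#!/usr/bin/env bash
shopt -s extglob  # needed for the +([0-9]) pattern below
# Sketch of the get_meminfo parse loop visible in the xtrace.
get_meminfo() {
    local get=$1 node=$2
    local var val _ mem_f line
    local -a mem
    mem_f=/proc/meminfo
    # Per-node counters live in sysfs; every line there starts with "Node N ".
    [[ -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")  # drop the "Node 0 " prefix, if present
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"  # "_" swallows the trailing "kB"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_meminfo HugePages_Total    # system-wide count, e.g. 1024
get_meminfo HugePages_Free 0   # node 0 only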
00:06:25.870 12:50:29 -- setup/common.sh@31 -- # IFS=': '
00:06:25.870 12:50:29 -- setup/common.sh@31 -- # read -r var val _
00:06:25.870 12:50:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5126528 kB' 'MemAvailable: 9502024 kB' 'Buffers: 37648 kB' 'Cached: 4462288 kB' 'SwapCached: 0 kB' 'Active: 1222372 kB' 'Inactive: 3412240 kB' 'Active(anon): 143704 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1078668 kB' 'Inactive(file): 3410452 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 1120 kB' 'Writeback: 0 kB' 'AnonPages: 152608 kB' 'Mapped: 73664 kB' 'Shmem: 2616 kB' 'KReclaimable: 208240 kB' 'Slab: 299340 kB' 'SReclaimable: 208240 kB' 'SUnreclaim: 91100 kB' 'KernelStack: 4672 kB' 'PageTables: 3872 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 660492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14404 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
[... xtrace condensed: each key in the snapshot is compared against HugePages_Surp and skipped with "continue" until it matches ...]
00:06:25.871 12:50:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:25.871 12:50:29 -- setup/common.sh@33 -- # echo 0
00:06:25.871 12:50:29 -- setup/common.sh@33 -- # return 0
00:06:25.871 12:50:29 -- setup/hugepages.sh@99 -- # surp=0
00:06:25.871 12:50:29 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:25.871 12:50:29 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:06:25.871 12:50:29 -- setup/common.sh@18 -- # local node=
00:06:25.871 12:50:29 -- setup/common.sh@19 -- # local var val
00:06:25.871 12:50:29 -- setup/common.sh@20 -- # local mem_f mem
00:06:25.871 12:50:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:25.871 12:50:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:25.871 12:50:29 -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:25.871 12:50:29 -- setup/common.sh@28 -- # mapfile -t mem
00:06:25.871 12:50:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:25.871 12:50:29 -- setup/common.sh@31 -- # IFS=': '
00:06:25.871 12:50:29 -- setup/common.sh@31 -- # read -r var val _
00:06:25.871 12:50:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5126528 kB' 'MemAvailable: 9502024 kB' 'Buffers: 37648 kB' 'Cached: 4462288 kB' 'SwapCached: 0 kB' 'Active: 1222112 kB' 'Inactive: 3412240 kB' 'Active(anon): 143444 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1078668 kB' 'Inactive(file): 3410452 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 1120 kB' 'Writeback: 0 kB' 'AnonPages: 152480 kB' 'Mapped: 73664 kB' 'Shmem: 2616 kB' 'KReclaimable: 208240 kB' 'Slab: 299340 kB' 'SReclaimable: 208240 kB' 'SUnreclaim: 91100 kB' 'KernelStack: 4672 kB' 'PageTables: 3872 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 660492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14404 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
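This is now the third full scan of essentially the same snapshot in one verify pass (AnonHugePages, then HugePages_Surp, and next HugePages_Rsvd), because get_meminfo restarts from the top for every key. Where only the hugepage counters matter, a single pass can collect them all; a sketch of that alternative, not what setup/common.sh actually does:

#!/usr/bin/env bash
# Sketch: gather every HugePages_* counter from /proc/meminfo in one pass.
declare -A hp
while IFS=': ' read -r var val _; do
    [[ $var == HugePages_* ]] && hp[$var]=$val
done < /proc/meminfo
echo "total=${hp[HugePages_Total]} free=${hp[HugePages_Free]}" \
     "rsvd=${hp[HugePages_Rsvd]} surp=${hp[HugePages_Surp]}"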
[... xtrace condensed: each key in the snapshot is compared against HugePages_Rsvd and skipped with "continue" until it matches ...]
00:06:25.872 12:50:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:25.872 12:50:29 -- setup/common.sh@33 -- # echo 0
00:06:25.872 12:50:29 -- setup/common.sh@33 -- # return 0
00:06:25.872 12:50:29 -- setup/hugepages.sh@100 -- # resv=0
00:06:25.872 12:50:29 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:06:25.872 12:50:29 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:06:25.872 12:50:29 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:06:25.872 12:50:29 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:06:25.872 12:50:29 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:25.872 12:50:29 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
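Those two arithmetic gates are the core invariant of verify_nr_hugepages: HugePages_Total must equal the requested nr_hugepages plus any surplus and reserved pages, and with surp=0 and resv=0 it must equal nr_hugepages exactly. A condensed sketch of the same check, standalone rather than via get_meminfo:

#!/usr/bin/env bash
# Sketch of the accounting checked at setup/hugepages.sh@107 and @109 above.
nr_hugepages=1024  # the value configured by get_test_nr_hugepages earlier
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
(( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2
(( total == nr_hugepages )) && echo "all $total pages match the configured count"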
00:06:25.872 12:50:29 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:06:25.872 12:50:29 -- setup/common.sh@17 -- # local get=HugePages_Total
00:06:25.872 12:50:29 -- setup/common.sh@18 -- # local node=
00:06:25.872 12:50:29 -- setup/common.sh@19 -- # local var val
00:06:25.872 12:50:29 -- setup/common.sh@20 -- # local mem_f mem
00:06:25.872 12:50:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:25.872 12:50:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:25.872 12:50:29 -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:25.873 12:50:29 -- setup/common.sh@28 -- # mapfile -t mem
00:06:25.873 12:50:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:25.873 12:50:29 -- setup/common.sh@31 -- # IFS=': '
00:06:25.873 12:50:29 -- setup/common.sh@31 -- # read -r var val _
00:06:25.873 12:50:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5127080 kB' 'MemAvailable: 9502580 kB' 'Buffers: 37648 kB' 'Cached: 4462292 kB' 'SwapCached: 0 kB' 'Active: 1221692 kB' 'Inactive: 3412240 kB' 'Active(anon): 143020 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1078672 kB' 'Inactive(file): 3410452 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 1120 kB' 'Writeback: 0 kB' 'AnonPages: 152524 kB' 'Mapped: 73356 kB' 'Shmem: 2616 kB' 'KReclaimable: 208240 kB' 'Slab: 299340 kB' 'SReclaimable: 208240 kB' 'SUnreclaim: 91100 kB' 'KernelStack: 4608 kB' 'PageTables: 3780 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 658528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14420 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
[... xtrace condensed: each key in the snapshot is compared against HugePages_Total and skipped with "continue" ...]
12:50:29 -- setup/common.sh@31 -- # IFS=': ' 00:06:25.874 12:50:29 -- setup/common.sh@31 -- # read -r var val _ 00:06:25.874 12:50:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:25.874 12:50:29 -- setup/common.sh@32 -- # continue 00:06:25.874 12:50:29 -- setup/common.sh@31 -- # IFS=': ' 00:06:25.874 12:50:29 -- setup/common.sh@31 -- # read -r var val _ 00:06:25.874 12:50:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:25.874 12:50:29 -- setup/common.sh@32 -- # continue 00:06:25.874 12:50:29 -- setup/common.sh@31 -- # IFS=': ' 00:06:25.874 12:50:29 -- setup/common.sh@31 -- # read -r var val _ 00:06:25.874 12:50:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:25.874 12:50:29 -- setup/common.sh@32 -- # continue 00:06:25.874 12:50:29 -- setup/common.sh@31 -- # IFS=': ' 00:06:25.874 12:50:29 -- setup/common.sh@31 -- # read -r var val _ 00:06:25.874 12:50:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:25.874 12:50:29 -- setup/common.sh@32 -- # continue 00:06:25.874 12:50:29 -- setup/common.sh@31 -- # IFS=': ' 00:06:25.874 12:50:29 -- setup/common.sh@31 -- # read -r var val _ 00:06:25.874 12:50:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:25.874 12:50:29 -- setup/common.sh@32 -- # continue 00:06:25.874 12:50:29 -- setup/common.sh@31 -- # IFS=': ' 00:06:25.874 12:50:29 -- setup/common.sh@31 -- # read -r var val _ 00:06:25.874 12:50:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:25.874 12:50:29 -- setup/common.sh@32 -- # continue 00:06:25.874 12:50:29 -- setup/common.sh@31 -- # IFS=': ' 00:06:25.874 12:50:29 -- setup/common.sh@31 -- # read -r var val _ 00:06:25.874 12:50:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:25.874 12:50:29 -- setup/common.sh@32 -- # continue 00:06:25.874 12:50:29 -- setup/common.sh@31 -- # IFS=': ' 00:06:25.874 12:50:29 -- setup/common.sh@31 -- # read -r var val _ 00:06:25.874 12:50:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:25.874 12:50:29 -- setup/common.sh@32 -- # continue 00:06:25.874 12:50:29 -- setup/common.sh@31 -- # IFS=': ' 00:06:25.874 12:50:29 -- setup/common.sh@31 -- # read -r var val _ 00:06:25.874 12:50:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:25.874 12:50:29 -- setup/common.sh@32 -- # continue 00:06:25.874 12:50:29 -- setup/common.sh@31 -- # IFS=': ' 00:06:25.874 12:50:29 -- setup/common.sh@31 -- # read -r var val _ 00:06:25.874 12:50:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:25.874 12:50:29 -- setup/common.sh@33 -- # echo 1024 00:06:25.874 12:50:29 -- setup/common.sh@33 -- # return 0 00:06:25.874 12:50:29 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:25.874 12:50:29 -- setup/hugepages.sh@112 -- # get_nodes 00:06:25.874 12:50:29 -- setup/hugepages.sh@27 -- # local node 00:06:25.874 12:50:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:25.874 12:50:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:25.874 12:50:29 -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:25.874 12:50:29 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:25.874 12:50:29 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:25.874 12:50:29 -- 
00:06:25.874 12:50:29 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:25.874 12:50:29 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:06:25.874 12:50:29 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:06:25.874 12:50:29 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:25.874 12:50:29 -- setup/common.sh@18 -- # local node=0
00:06:25.874 12:50:29 -- setup/common.sh@19 -- # local var val
00:06:25.874 12:50:29 -- setup/common.sh@20 -- # local mem_f mem
00:06:25.874 12:50:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:25.874 12:50:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:06:25.874 12:50:29 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:06:25.874 12:50:29 -- setup/common.sh@28 -- # mapfile -t mem
00:06:25.874 12:50:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:25.874 12:50:29 -- setup/common.sh@31 -- # IFS=': '
00:06:25.874 12:50:29 -- setup/common.sh@31 -- # read -r var val _
00:06:25.874 12:50:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5127056 kB' 'MemUsed: 7124048 kB' 'Active: 1221828 kB' 'Inactive: 3412240 kB' 'Active(anon): 143156 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1078672 kB' 'Inactive(file): 3410452 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'Dirty: 1120 kB' 'Writeback: 0 kB' 'FilePages: 4499940 kB' 'Mapped: 73356 kB' 'AnonPages: 152668 kB' 'Shmem: 2616 kB' 'KernelStack: 4708 kB' 'PageTables: 3820 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 208240 kB' 'Slab: 299340 kB' 'SReclaimable: 208240 kB' 'SUnreclaim: 91100 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:06:25.874 12:50:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[... xtrace elided: the loop "continue"s past every node0 meminfo key until HugePages_Surp matches ...]
00:06:25.875 12:50:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:25.875 12:50:29 -- setup/common.sh@33 -- # echo 0
00:06:25.875 12:50:29 -- setup/common.sh@33 -- # return 0
00:06:25.875 12:50:29 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:06:25.875 12:50:29 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:06:25.875 12:50:29 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:06:25.875 12:50:29 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:06:25.875 12:50:29 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:06:25.875 node0=1024 expecting 1024
00:06:25.875 12:50:29 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:06:25.875 real 0m0.857s
00:06:25.875 user 0m0.219s
00:06:25.875 sys 0m0.671s
00:06:25.875 12:50:29 -- common/autotest_common.sh@1100 -- # xtrace_disable
00:06:25.875 ************************************
00:06:25.875 END TEST even_2G_alloc
00:06:25.875 ************************************
00:06:25.875 12:50:29 -- common/autotest_common.sh@10 -- # set +x
00:06:25.875 12:50:29 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:06:25.875 12:50:29 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']'
00:06:25.875 12:50:29 -- common/autotest_common.sh@1081 -- # xtrace_disable
00:06:25.875 12:50:29 -- common/autotest_common.sh@10 -- # set +x
00:06:25.875 ************************************
00:06:25.875 START TEST odd_alloc
00:06:25.875 ************************************
00:06:25.875 12:50:29 -- common/autotest_common.sh@1099 -- # odd_alloc
00:06:25.875 12:50:29 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:06:25.875 12:50:29 -- setup/hugepages.sh@49 -- # local size=2098176
00:06:25.875 12:50:29 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:06:25.875 12:50:29 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:06:25.875 12:50:29 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:06:25.875 12:50:29 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:06:25.875 12:50:29 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:06:25.875 12:50:29 -- setup/hugepages.sh@62 -- # local user_nodes
00:06:25.875 12:50:29 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:06:25.875 12:50:29 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:06:25.875 12:50:29 -- setup/hugepages.sh@67 -- # nodes_test=()
00:06:25.875 12:50:29 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:06:25.875 12:50:29 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:06:25.875 12:50:29 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:06:25.875 12:50:29 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:06:25.875 12:50:29 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:06:25.875 12:50:29 -- setup/hugepages.sh@83 -- # : 0
00:06:25.875 12:50:29 -- setup/hugepages.sh@84 -- # : 0
00:06:25.875 12:50:29 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:06:25.875 12:50:29 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:06:25.875 12:50:29 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:06:25.875 12:50:29 -- setup/hugepages.sh@160 -- # setup output
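[Editor's note] Before following setup's output below: the HugePages_Surp lookup traced in the even_2G_alloc verification above differs from the global ones only in its source file. With a node argument, setup/common.sh@23-24 switches mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo, and @29 strips the "Node 0 " prefix that every line of the per-node file carries. Extending the earlier sketch with that selection (again a paraphrase, not the verbatim helper):

    # Per-node-aware lookup; extglob is needed for the "Node <n> " prefix
    # strip seen at setup/common.sh@29.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 MemTotal: ..." -> "MemTotal: ..."
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo HugePages_Surp 0   # node 0 surplus hugepages; 0 in this run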
00:06:25.875 12:50:29 -- setup/common.sh@9 -- # [[ output == output ]]
00:06:25.875 12:50:29 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:26.158 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:06:26.158 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:06:26.727 12:50:30 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:06:26.727 12:50:30 -- setup/hugepages.sh@89 -- # local node
00:06:26.727 12:50:30 -- setup/hugepages.sh@90 -- # local sorted_t
00:06:26.727 12:50:30 -- setup/hugepages.sh@91 -- # local sorted_s
00:06:26.727 12:50:30 -- setup/hugepages.sh@92 -- # local surp
00:06:26.727 12:50:30 -- setup/hugepages.sh@93 -- # local resv
00:06:26.727 12:50:30 -- setup/hugepages.sh@94 -- # local anon
00:06:26.727 12:50:30 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:26.727 12:50:30 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:26.727 12:50:30 -- setup/common.sh@17 -- # local get=AnonHugePages
00:06:26.727 12:50:30 -- setup/common.sh@18 -- # local node=
00:06:26.727 12:50:30 -- setup/common.sh@19 -- # local var val
00:06:26.727 12:50:30 -- setup/common.sh@20 -- # local mem_f mem
00:06:26.727 12:50:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:26.727 12:50:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:26.727 12:50:30 -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:26.727 12:50:30 -- setup/common.sh@28 -- # mapfile -t mem
00:06:26.727 12:50:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:26.727 12:50:30 -- setup/common.sh@31 -- # IFS=': '
00:06:26.727 12:50:30 -- setup/common.sh@31 -- # read -r var val _
00:06:26.728 12:50:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5123596 kB' 'MemAvailable: 9499128 kB' 'Buffers: 37648 kB' 'Cached: 4462292 kB' 'SwapCached: 0 kB' 'Active: 1222012 kB' 'Inactive: 3412220 kB' 'Active(anon): 143320 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1078692 kB' 'Inactive(file): 3410432 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 380 kB' 'Writeback: 0 kB' 'AnonPages: 152404 kB' 'Mapped: 73668 kB' 'Shmem: 2616 kB' 'KReclaimable: 208272 kB' 'Slab: 299348 kB' 'SReclaimable: 208272 kB' 'SUnreclaim: 91076 kB' 'KernelStack: 4640 kB' 'PageTables: 3824 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5075952 kB' 'Committed_AS: 647320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14372 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
00:06:26.728 12:50:30 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
[... xtrace elided: the loop "continue"s past every key from MemTotal through HardwareCorrupted until AnonHugePages matches ...]
00:06:26.728 12:50:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:26.728 12:50:30 -- setup/common.sh@33 -- # echo 0
00:06:26.728 12:50:30 -- setup/common.sh@33 -- # return 0
00:06:26.728 12:50:30 -- setup/hugepages.sh@97 -- # anon=0
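[Editor's note] The dump in this verify pass now shows HugePages_Total: 1025 and Hugetlb: 2099200 kB, matching the nr_hugepages=1025 computed at setup/hugepages.sh@57 from HUGEMEM=2049. The traces only show the inputs (size=2098176 kB, Hugepagesize 2048 kB) and the result; rounding up to whole pages is the natural reading and is sketched here as an assumption:

    # HUGEMEM is megabytes; sizes below are kB. Ceiling division is an
    # assumption inferred from size=2098176 -> nr_hugepages=1025 in the traces.
    HUGEMEM=2049
    size=$((HUGEMEM * 1024))                 # 2098176 kB requested
    default_hugepages=2048                   # kB, from 'Hugepagesize: 2048 kB'
    nr_hugepages=$(((size + default_hugepages - 1) / default_hugepages))
    echo "nr_hugepages=$nr_hugepages"        # 1025 -- odd on purpose for odd_alloc
    echo "Hugetlb=$((nr_hugepages * default_hugepages)) kB"   # 2099200 kB, as dumped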
00:06:26.728 12:50:30 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:26.728 12:50:30 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:26.728 12:50:30 -- setup/common.sh@18 -- # local node=
00:06:26.728 12:50:30 -- setup/common.sh@19 -- # local var val
00:06:26.728 12:50:30 -- setup/common.sh@20 -- # local mem_f mem
00:06:26.728 12:50:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:26.728 12:50:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:26.728 12:50:30 -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:26.728 12:50:30 -- setup/common.sh@28 -- # mapfile -t mem
00:06:26.728 12:50:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:26.729 12:50:30 -- setup/common.sh@31 -- # IFS=': '
00:06:26.729 12:50:30 -- setup/common.sh@31 -- # read -r var val _
00:06:26.729 12:50:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5123572 kB' 'MemAvailable: 9499104 kB' 'Buffers: 37648 kB' 'Cached: 4462292 kB' 'SwapCached: 0 kB' 'Active: 1222212 kB' 'Inactive: 3412216 kB' 'Active(anon): 143516 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1078696 kB' 'Inactive(file): 3410428 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 152572 kB' 'Mapped: 73456 kB' 'Shmem: 2616 kB' 'KReclaimable: 208272 kB' 'Slab: 299348 kB' 'SReclaimable: 208272 kB' 'SUnreclaim: 91076 kB' 'KernelStack: 4672 kB' 'PageTables: 3852 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5075952 kB' 'Committed_AS: 652996 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14372 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
00:06:26.729 12:50:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[... xtrace elided: the loop "continue"s past every key, including HugePages_Total, HugePages_Free and HugePages_Rsvd, until HugePages_Surp matches ...]
00:06:26.730 12:50:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:26.730 12:50:30 -- setup/common.sh@33 -- # echo 0
00:06:26.730 12:50:30 -- setup/common.sh@33 -- # return 0
00:06:26.730 12:50:30 -- setup/hugepages.sh@99 -- # surp=0
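[Editor's note] verify_nr_hugepages reads AnonHugePages, HugePages_Surp and HugePages_Rsvd in turn and checks them against the configured count, as the even_2G_alloc pass did at hugepages.sh@107 with (( 1024 == nr_hugepages + surp + resv )). A sketch of that bookkeeping for the current run, using get_meminfo as sketched earlier; the exact operand roles in the script's assertion are inferred, and here all adjustments are 0, so the check reduces to total == expected:

    # Hugepage bookkeeping in the shape the @102-@109 traces show.
    expected=1025                                 # requested by odd_alloc
    nr_hugepages=$(get_meminfo HugePages_Total)   # 1025 in the dumps above
    surp=$(get_meminfo HugePages_Surp)            # 0
    resv=$(get_meminfo HugePages_Rsvd)            # 0
    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
    if (( expected == nr_hugepages + surp + resv )); then
        echo "node0=$nr_hugepages expecting $expected"
    else
        echo "hugepage accounting mismatch" >&2
    fi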
00:06:26.730 12:50:30 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:26.730 12:50:30 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:06:26.730 12:50:30 -- setup/common.sh@18 -- # local node=
00:06:26.730 12:50:30 -- setup/common.sh@19 -- # local var val
00:06:26.730 12:50:30 -- setup/common.sh@20 -- # local mem_f mem
00:06:26.730 12:50:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:26.730 12:50:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:26.730 12:50:30 -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:26.730 12:50:30 -- setup/common.sh@28 -- # mapfile -t mem
00:06:26.730 12:50:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:26.730 12:50:30 -- setup/common.sh@31 -- # IFS=': '
00:06:26.730 12:50:30 -- setup/common.sh@31 -- # read -r var val _
00:06:26.731 12:50:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5123612 kB' 'MemAvailable: 9499144 kB' 'Buffers: 37648 kB' 'Cached: 4462292 kB' 'SwapCached: 0 kB' 'Active: 1221560 kB' 'Inactive: 3412216 kB' 'Active(anon): 142864 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1078696 kB' 'Inactive(file): 3410428 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 152300 kB' 'Mapped: 73408 kB' 'Shmem: 2616 kB' 'KReclaimable: 208272 kB' 'Slab: 299348 kB' 'SReclaimable: 208272 kB' 'SUnreclaim: 91076 kB' 'KernelStack: 4572 kB' 'PageTables: 3832 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5075952 kB' 'Committed_AS: 652996 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14372 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
00:06:26.731 12:50:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
[... xtrace elided: the loop "continue"s past MemTotal through NFS_Unstable; the log breaks off mid-scan ...]
00:06:26.731 12:50:30 -- setup/common.sh@31 -- #
read -r var val _ 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:26.731 12:50:30 -- setup/common.sh@32 
-- # continue 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:26.731 12:50:30 -- setup/common.sh@33 -- # echo 0 00:06:26.731 12:50:30 -- setup/common.sh@33 -- # return 0 00:06:26.731 12:50:30 -- setup/hugepages.sh@100 -- # resv=0 00:06:26.731 nr_hugepages=1025 00:06:26.731 12:50:30 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:06:26.731 resv_hugepages=0 00:06:26.731 surplus_hugepages=0 00:06:26.731 12:50:30 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:26.731 12:50:30 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:26.731 anon_hugepages=0 00:06:26.731 12:50:30 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:26.731 12:50:30 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:06:26.731 12:50:30 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:06:26.731 12:50:30 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:26.731 12:50:30 -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:26.731 12:50:30 -- setup/common.sh@18 -- # local node= 00:06:26.731 12:50:30 -- setup/common.sh@19 -- # local var val 00:06:26.731 12:50:30 -- setup/common.sh@20 -- # local mem_f mem 00:06:26.731 12:50:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:26.731 12:50:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:26.731 12:50:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:26.731 12:50:30 -- setup/common.sh@28 -- # mapfile -t mem 00:06:26.731 12:50:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.731 12:50:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5123872 kB' 'MemAvailable: 9499404 kB' 'Buffers: 37648 kB' 'Cached: 4462292 kB' 'SwapCached: 0 kB' 'Active: 1221820 kB' 'Inactive: 3412216 kB' 'Active(anon): 143124 kB' 'Inactive(anon): 
1788 kB' 'Active(file): 1078696 kB' 'Inactive(file): 3410428 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 152432 kB' 'Mapped: 73408 kB' 'Shmem: 2616 kB' 'KReclaimable: 208272 kB' 'Slab: 299348 kB' 'SReclaimable: 208272 kB' 'SUnreclaim: 91076 kB' 'KernelStack: 4640 kB' 'PageTables: 3832 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5075952 kB' 'Committed_AS: 657800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14388 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB' 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.731 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.731 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 
00:06:26.732 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.732 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.732 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:26.733 12:50:30 -- setup/common.sh@33 -- # echo 1025 00:06:26.733 12:50:30 -- setup/common.sh@33 -- # return 0 00:06:26.733 12:50:30 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:06:26.733 12:50:30 -- setup/hugepages.sh@112 -- # get_nodes 00:06:26.733 12:50:30 -- setup/hugepages.sh@27 -- # local node 00:06:26.733 12:50:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:26.733 12:50:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:06:26.733 12:50:30 -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:26.733 12:50:30 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:26.733 
12:50:30 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:26.733 12:50:30 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:26.733 12:50:30 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:26.733 12:50:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:26.733 12:50:30 -- setup/common.sh@18 -- # local node=0 00:06:26.733 12:50:30 -- setup/common.sh@19 -- # local var val 00:06:26.733 12:50:30 -- setup/common.sh@20 -- # local mem_f mem 00:06:26.733 12:50:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:26.733 12:50:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:26.733 12:50:30 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:26.733 12:50:30 -- setup/common.sh@28 -- # mapfile -t mem 00:06:26.733 12:50:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.733 12:50:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5124148 kB' 'MemUsed: 7126956 kB' 'Active: 1221636 kB' 'Inactive: 3412216 kB' 'Active(anon): 142940 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1078696 kB' 'Inactive(file): 3410428 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 4499940 kB' 'Mapped: 73404 kB' 'AnonPages: 152412 kB' 'Shmem: 2616 kB' 'KernelStack: 4596 kB' 'PageTables: 3624 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 208272 kB' 'Slab: 299360 kB' 'SReclaimable: 208272 kB' 'SUnreclaim: 91088 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.733 12:50:30 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 
00:06:26.733 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.733 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.733 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.734 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.734 12:50:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.734 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.734 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.734 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.734 12:50:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.734 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.734 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.734 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.734 12:50:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:06:26.734 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.734 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.734 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.734 12:50:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.734 12:50:30 -- setup/common.sh@32 -- # continue 00:06:26.734 12:50:30 -- setup/common.sh@31 -- # IFS=': ' 00:06:26.734 12:50:30 -- setup/common.sh@31 -- # read -r var val _ 00:06:26.734 12:50:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:26.734 12:50:30 -- setup/common.sh@33 -- # echo 0 00:06:26.734 12:50:30 -- setup/common.sh@33 -- # return 0 00:06:26.734 12:50:30 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:26.734 12:50:30 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:26.734 12:50:30 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:26.734 12:50:30 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:26.734 node0=1025 expecting 1025 00:06:26.734 12:50:30 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:06:26.734 12:50:30 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:06:26.734 00:06:26.734 real 0m0.915s 00:06:26.734 user 0m0.287s 00:06:26.734 sys 0m0.665s 00:06:26.734 12:50:30 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:06:26.734 12:50:30 -- common/autotest_common.sh@10 -- # set +x 00:06:26.734 ************************************ 00:06:26.734 END TEST odd_alloc 00:06:26.734 ************************************ 00:06:26.991 12:50:30 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:06:26.991 12:50:30 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:26.991 12:50:30 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:26.991 12:50:30 -- common/autotest_common.sh@10 -- # set +x 00:06:26.991 ************************************ 00:06:26.991 START TEST custom_alloc 00:06:26.991 ************************************ 00:06:26.991 12:50:30 -- common/autotest_common.sh@1099 -- # custom_alloc 00:06:26.991 12:50:30 -- setup/hugepages.sh@167 -- # local IFS=, 00:06:26.991 12:50:30 -- setup/hugepages.sh@169 -- # local node 00:06:26.991 12:50:30 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:06:26.991 12:50:30 -- setup/hugepages.sh@170 -- # local nodes_hp 00:06:26.991 12:50:30 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:06:26.991 12:50:30 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:06:26.991 12:50:30 -- setup/hugepages.sh@49 -- # local size=1048576 00:06:26.991 12:50:30 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:26.991 12:50:30 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:26.991 12:50:30 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:06:26.991 12:50:30 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:26.991 12:50:30 -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:06:26.991 12:50:30 -- setup/hugepages.sh@62 -- # local user_nodes 00:06:26.991 12:50:30 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:06:26.991 12:50:30 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:26.991 12:50:30 -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:26.991 12:50:30 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:26.991 12:50:30 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:26.991 12:50:30 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:06:26.991 12:50:30 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:26.991 12:50:30 
-- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:06:26.991 12:50:30 -- setup/hugepages.sh@83 -- # : 0 00:06:26.991 12:50:30 -- setup/hugepages.sh@84 -- # : 0 00:06:26.991 12:50:30 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:26.991 12:50:30 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:06:26.991 12:50:30 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:06:26.991 12:50:30 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:06:26.991 12:50:30 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:06:26.991 12:50:30 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:06:26.991 12:50:30 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:06:26.991 12:50:30 -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:06:26.991 12:50:30 -- setup/hugepages.sh@62 -- # local user_nodes 00:06:26.991 12:50:30 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:06:26.992 12:50:30 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:26.992 12:50:30 -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:26.992 12:50:30 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:26.992 12:50:30 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:26.992 12:50:30 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:06:26.992 12:50:30 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:06:26.992 12:50:30 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:06:26.992 12:50:30 -- setup/hugepages.sh@78 -- # return 0 00:06:26.992 12:50:30 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:06:26.992 12:50:30 -- setup/hugepages.sh@187 -- # setup output 00:06:26.992 12:50:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:26.992 12:50:30 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:27.249 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:27.249 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:27.510 12:50:31 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:06:27.510 12:50:31 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:06:27.510 12:50:31 -- setup/hugepages.sh@89 -- # local node 00:06:27.510 12:50:31 -- setup/hugepages.sh@90 -- # local sorted_t 00:06:27.510 12:50:31 -- setup/hugepages.sh@91 -- # local sorted_s 00:06:27.510 12:50:31 -- setup/hugepages.sh@92 -- # local surp 00:06:27.510 12:50:31 -- setup/hugepages.sh@93 -- # local resv 00:06:27.510 12:50:31 -- setup/hugepages.sh@94 -- # local anon 00:06:27.510 12:50:31 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:27.510 12:50:31 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:27.510 12:50:31 -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:27.510 12:50:31 -- setup/common.sh@18 -- # local node= 00:06:27.510 12:50:31 -- setup/common.sh@19 -- # local var val 00:06:27.510 12:50:31 -- setup/common.sh@20 -- # local mem_f mem 00:06:27.510 12:50:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:27.510 12:50:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:27.510 12:50:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:27.510 12:50:31 -- setup/common.sh@28 -- # mapfile -t mem 00:06:27.510 12:50:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:27.510 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.510 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.510 12:50:31 -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 12251104 kB' 'MemFree: 6174356 kB' 'MemAvailable: 10549904 kB' 'Buffers: 37648 kB' 'Cached: 4462292 kB' 'SwapCached: 0 kB' 'Active: 1221588 kB' 'Inactive: 3412220 kB' 'Active(anon): 142896 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1078692 kB' 'Inactive(file): 3410432 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 440 kB' 'Writeback: 0 kB' 'AnonPages: 152324 kB' 'Mapped: 73436 kB' 'Shmem: 2616 kB' 'KReclaimable: 208288 kB' 'Slab: 299140 kB' 'SReclaimable: 208288 kB' 'SUnreclaim: 90852 kB' 'KernelStack: 4608 kB' 'PageTables: 3780 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601264 kB' 'Committed_AS: 653928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14420 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB' 00:06:27.510 12:50:31 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:27.510 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.510 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.510 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.510 12:50:31 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:27.510 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.510 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.510 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.510 12:50:31 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:27.510 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.510 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.510 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.510 12:50:31 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:27.510 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.510 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.510 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.510 12:50:31 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:27.510 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.510 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.510 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.510 12:50:31 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:27.510 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.510 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.510 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.510 12:50:31 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:27.510 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.510 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.510 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.510 12:50:31 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:27.510 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.510 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # continue 
00:06:27.511 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # [[ KReclaimable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # read -r 
var val _ 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:27.511 12:50:31 -- setup/common.sh@33 -- # echo 0 00:06:27.511 12:50:31 -- setup/common.sh@33 -- # return 0 00:06:27.511 12:50:31 -- setup/hugepages.sh@97 -- # anon=0 00:06:27.511 12:50:31 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:27.511 12:50:31 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:27.511 12:50:31 -- setup/common.sh@18 -- # local node= 00:06:27.511 12:50:31 -- setup/common.sh@19 -- # local var val 00:06:27.511 12:50:31 -- setup/common.sh@20 -- # local mem_f mem 00:06:27.511 12:50:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:27.511 12:50:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:27.511 12:50:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:27.511 12:50:31 -- setup/common.sh@28 -- # mapfile -t mem 00:06:27.511 12:50:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.511 12:50:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 6174616 kB' 'MemAvailable: 10550164 kB' 'Buffers: 37648 kB' 'Cached: 4462292 kB' 'SwapCached: 0 kB' 'Active: 1221588 kB' 'Inactive: 3412220 kB' 'Active(anon): 142896 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1078692 kB' 'Inactive(file): 3410432 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 440 kB' 'Writeback: 0 kB' 'AnonPages: 152324 kB' 'Mapped: 73436 kB' 'Shmem: 2616 kB' 'KReclaimable: 208288 kB' 'Slab: 299140 kB' 'SReclaimable: 208288 kB' 'SUnreclaim: 90852 kB' 'KernelStack: 4608 kB' 'PageTables: 3780 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601264 kB' 'Committed_AS: 659600 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14420 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB' 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.511 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.511 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.512 12:50:31 -- 
setup/common.sh@31 -- # read -r var val _ 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # continue 
00:06:27.512 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.512 12:50:31 -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.512 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.512 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.513 12:50:31 -- setup/common.sh@33 -- # echo 0 00:06:27.513 12:50:31 -- setup/common.sh@33 -- # return 0 00:06:27.513 12:50:31 -- setup/hugepages.sh@99 -- # surp=0 00:06:27.513 12:50:31 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:27.513 12:50:31 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:27.513 12:50:31 -- setup/common.sh@18 -- # local node= 00:06:27.513 12:50:31 -- setup/common.sh@19 -- # local var val 00:06:27.513 12:50:31 -- setup/common.sh@20 -- # local mem_f mem 00:06:27.513 12:50:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:27.513 12:50:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:27.513 12:50:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:27.513 12:50:31 -- setup/common.sh@28 -- # mapfile -t mem 00:06:27.513 12:50:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.513 12:50:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 6174884 kB' 'MemAvailable: 10550432 kB' 'Buffers: 37648 kB' 'Cached: 4462292 kB' 'SwapCached: 0 kB' 
'Active: 1221816 kB' 'Inactive: 3412220 kB' 'Active(anon): 143124 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1078692 kB' 'Inactive(file): 3410432 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 440 kB' 'Writeback: 0 kB' 'AnonPages: 152812 kB' 'Mapped: 73436 kB' 'Shmem: 2616 kB' 'KReclaimable: 208288 kB' 'Slab: 299140 kB' 'SReclaimable: 208288 kB' 'SUnreclaim: 90852 kB' 'KernelStack: 4592 kB' 'PageTables: 3756 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601264 kB' 'Committed_AS: 659600 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14420 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB' 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # read -r var 
val _ 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.513 12:50:31 -- 
setup/common.sh@31 -- # IFS=': ' 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.513 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.513 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:27.514 12:50:31 -- setup/common.sh@33 -- # echo 0 00:06:27.514 12:50:31 -- setup/common.sh@33 -- # return 0 
00:06:27.514 12:50:31 -- setup/hugepages.sh@100 -- # resv=0 00:06:27.514 nr_hugepages=512 00:06:27.514 resv_hugepages=0 00:06:27.514 12:50:31 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:06:27.514 12:50:31 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:27.514 surplus_hugepages=0 00:06:27.514 12:50:31 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:27.514 anon_hugepages=0 00:06:27.514 12:50:31 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:27.514 12:50:31 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:06:27.514 12:50:31 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:06:27.514 12:50:31 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:27.514 12:50:31 -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:27.514 12:50:31 -- setup/common.sh@18 -- # local node= 00:06:27.514 12:50:31 -- setup/common.sh@19 -- # local var val 00:06:27.514 12:50:31 -- setup/common.sh@20 -- # local mem_f mem 00:06:27.514 12:50:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:27.514 12:50:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:27.514 12:50:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:27.514 12:50:31 -- setup/common.sh@28 -- # mapfile -t mem 00:06:27.514 12:50:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.514 12:50:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 6174884 kB' 'MemAvailable: 10550432 kB' 'Buffers: 37648 kB' 'Cached: 4462292 kB' 'SwapCached: 0 kB' 'Active: 1221896 kB' 'Inactive: 3412220 kB' 'Active(anon): 143204 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1078692 kB' 'Inactive(file): 3410432 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 440 kB' 'Writeback: 0 kB' 'AnonPages: 152484 kB' 'Mapped: 73436 kB' 'Shmem: 2616 kB' 'KReclaimable: 208288 kB' 'Slab: 299140 kB' 'SReclaimable: 208288 kB' 'SUnreclaim: 90852 kB' 'KernelStack: 4628 kB' 'PageTables: 3700 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601264 kB' 'Committed_AS: 664404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14436 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB' 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.514 12:50:31 -- 
setup/common.sh@31 -- # read -r var val _ 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:27.514 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.514 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:27.515 12:50:31 
-- setup/common.sh@32 -- # continue 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.515 
12:50:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # 
continue 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:27.515 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.515 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:27.516 12:50:31 -- setup/common.sh@33 -- # echo 512 00:06:27.516 12:50:31 -- setup/common.sh@33 -- # return 0 00:06:27.516 12:50:31 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:06:27.516 12:50:31 -- setup/hugepages.sh@112 -- # get_nodes 00:06:27.516 12:50:31 -- setup/hugepages.sh@27 -- # local node 00:06:27.516 12:50:31 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:27.516 12:50:31 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:27.516 12:50:31 -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:27.516 12:50:31 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:27.516 12:50:31 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:27.516 12:50:31 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:27.516 12:50:31 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:27.516 12:50:31 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:27.516 12:50:31 -- setup/common.sh@18 -- # local node=0 00:06:27.516 12:50:31 -- setup/common.sh@19 -- # local var val 00:06:27.516 12:50:31 -- setup/common.sh@20 -- # local mem_f mem 00:06:27.516 12:50:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:27.516 12:50:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:27.516 12:50:31 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:27.516 12:50:31 -- setup/common.sh@28 -- # mapfile -t mem 00:06:27.516 12:50:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.516 12:50:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 6174576 kB' 'MemUsed: 6076528 kB' 'Active: 1221476 kB' 'Inactive: 3412220 kB' 'Active(anon): 142784 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1078692 kB' 'Inactive(file): 3410432 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'Dirty: 440 kB' 'Writeback: 0 kB' 'FilePages: 4499940 kB' 'Mapped: 73436 kB' 'AnonPages: 152300 kB' 'Shmem: 2616 kB' 'KernelStack: 4664 kB' 'PageTables: 3648 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 208288 kB' 'Slab: 299140 kB' 'SReclaimable: 208288 kB' 'SUnreclaim: 90852 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:27.516 
12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.516 12:50:31 -- 
setup/common.sh@32 -- # continue 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.516 12:50:31 -- 
setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.516 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.516 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.517 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.517 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.517 12:50:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.517 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.517 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.517 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.517 12:50:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.517 12:50:31 -- setup/common.sh@32 -- # continue 00:06:27.517 12:50:31 -- setup/common.sh@31 -- # IFS=': ' 00:06:27.517 12:50:31 -- setup/common.sh@31 -- # read -r var val _ 00:06:27.517 12:50:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:27.517 12:50:31 -- setup/common.sh@33 -- # echo 0 00:06:27.517 12:50:31 -- setup/common.sh@33 -- # return 0 00:06:27.517 12:50:31 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:27.517 12:50:31 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:27.517 12:50:31 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:27.517 12:50:31 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:27.517 12:50:31 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:06:27.517 node0=512 expecting 512 00:06:27.517 12:50:31 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:06:27.517 00:06:27.517 real 0m0.659s 00:06:27.517 user 0m0.252s 00:06:27.517 sys 0m0.442s 00:06:27.517 12:50:31 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:06:27.517 12:50:31 -- common/autotest_common.sh@10 -- # set +x 00:06:27.517 ************************************ 00:06:27.517 END TEST custom_alloc 00:06:27.517 ************************************ 00:06:27.517 12:50:31 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:06:27.517 12:50:31 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:27.517 12:50:31 -- 
common/autotest_common.sh@1081 -- # xtrace_disable 00:06:27.517 12:50:31 -- common/autotest_common.sh@10 -- # set +x 00:06:27.775 ************************************ 00:06:27.775 START TEST no_shrink_alloc 00:06:27.775 ************************************ 00:06:27.775 12:50:31 -- common/autotest_common.sh@1099 -- # no_shrink_alloc 00:06:27.775 12:50:31 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:06:27.775 12:50:31 -- setup/hugepages.sh@49 -- # local size=2097152 00:06:27.775 12:50:31 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:06:27.775 12:50:31 -- setup/hugepages.sh@51 -- # shift 00:06:27.775 12:50:31 -- setup/hugepages.sh@52 -- # node_ids=("$@") 00:06:27.775 12:50:31 -- setup/hugepages.sh@52 -- # local node_ids 00:06:27.775 12:50:31 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:27.775 12:50:31 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:06:27.775 12:50:31 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:06:27.775 12:50:31 -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:06:27.775 12:50:31 -- setup/hugepages.sh@62 -- # local user_nodes 00:06:27.775 12:50:31 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:27.775 12:50:31 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:27.775 12:50:31 -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:27.775 12:50:31 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:27.775 12:50:31 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:06:27.775 12:50:31 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:06:27.775 12:50:31 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:06:27.775 12:50:31 -- setup/hugepages.sh@73 -- # return 0 00:06:27.775 12:50:31 -- setup/hugepages.sh@198 -- # setup output 00:06:27.775 12:50:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:27.775 12:50:31 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:28.033 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:28.033 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:28.602 12:50:32 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:06:28.602 12:50:32 -- setup/hugepages.sh@89 -- # local node 00:06:28.602 12:50:32 -- setup/hugepages.sh@90 -- # local sorted_t 00:06:28.602 12:50:32 -- setup/hugepages.sh@91 -- # local sorted_s 00:06:28.602 12:50:32 -- setup/hugepages.sh@92 -- # local surp 00:06:28.602 12:50:32 -- setup/hugepages.sh@93 -- # local resv 00:06:28.602 12:50:32 -- setup/hugepages.sh@94 -- # local anon 00:06:28.602 12:50:32 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:28.602 12:50:32 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:28.602 12:50:32 -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:28.602 12:50:32 -- setup/common.sh@18 -- # local node= 00:06:28.602 12:50:32 -- setup/common.sh@19 -- # local var val 00:06:28.602 12:50:32 -- setup/common.sh@20 -- # local mem_f mem 00:06:28.602 12:50:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:28.602 12:50:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:28.602 12:50:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:28.602 12:50:32 -- setup/common.sh@28 -- # mapfile -t mem 00:06:28.602 12:50:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:28.602 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.602 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.602 12:50:32 -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5143660 kB' 'MemAvailable: 9519332 kB' 'Buffers: 37656 kB' 'Cached: 4462420 kB' 'SwapCached: 0 kB' 'Active: 1205496 kB' 'Inactive: 3412344 kB' 'Active(anon): 126792 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1078704 kB' 'Inactive(file): 3410556 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 544 kB' 'Writeback: 0 kB' 'AnonPages: 136432 kB' 'Mapped: 72576 kB' 'Shmem: 2616 kB' 'KReclaimable: 208276 kB' 'Slab: 298352 kB' 'SReclaimable: 208276 kB' 'SUnreclaim: 90076 kB' 'KernelStack: 4288 kB' 'PageTables: 2960 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 614432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14148 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB' 00:06:28.602 12:50:32 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:28.602 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.602 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.602 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.602 12:50:32 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:28.602 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.602 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.602 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.602 12:50:32 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:28.602 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.602 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.602 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.602 12:50:32 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:28.602 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.602 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.602 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.602 12:50:32 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:28.602 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.602 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.602 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.602 12:50:32 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:28.602 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.602 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.602 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.602 12:50:32 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:28.602 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.602 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.602 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.602 12:50:32 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:28.602 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.602 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.602 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.602 12:50:32 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:28.602 12:50:32 
-- setup/common.sh@32 -- # continue 00:06:28.602 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.602 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.602 12:50:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:28.602 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.602 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.602 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.602 12:50:32 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:28.602 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.602 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.602 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.602 12:50:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:28.602 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.602 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.602 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.602 12:50:32 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:28.602 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.602 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.602 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.602 12:50:32 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # [[ 
KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.603 12:50:32 -- 
setup/common.sh@31 -- # read -r var val _ 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:28.603 12:50:32 -- setup/common.sh@33 -- # echo 0 00:06:28.603 12:50:32 -- setup/common.sh@33 -- # return 0 00:06:28.603 12:50:32 -- setup/hugepages.sh@97 -- # anon=0 00:06:28.603 12:50:32 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:28.603 12:50:32 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:28.603 12:50:32 -- setup/common.sh@18 -- # local node= 00:06:28.603 12:50:32 -- setup/common.sh@19 -- # local var val 00:06:28.603 12:50:32 -- setup/common.sh@20 -- # local mem_f mem 00:06:28.603 12:50:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:28.603 12:50:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:28.603 12:50:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:28.603 12:50:32 -- setup/common.sh@28 -- # mapfile -t mem 00:06:28.603 12:50:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.603 12:50:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5143920 kB' 'MemAvailable: 9519592 kB' 'Buffers: 37656 kB' 'Cached: 4462420 kB' 'SwapCached: 0 kB' 'Active: 1205568 kB' 'Inactive: 3412344 kB' 'Active(anon): 126864 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1078704 kB' 'Inactive(file): 3410556 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 544 kB' 'Writeback: 0 kB' 'AnonPages: 136244 kB' 'Mapped: 72576 kB' 'Shmem: 2616 kB' 'KReclaimable: 208276 kB' 'Slab: 298352 kB' 'SReclaimable: 208276 kB' 'SUnreclaim: 90076 kB' 'KernelStack: 4272 kB' 'PageTables: 2932 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 608424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14132 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB' 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.603 12:50:32 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.603 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.603 12:50:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # IFS=': 
' 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.604 12:50:32 -- 
setup/common.sh@32 -- # continue 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 
00:06:28.604 12:50:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.604 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.604 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.605 12:50:32 -- setup/common.sh@33 -- # echo 0 00:06:28.605 12:50:32 -- setup/common.sh@33 -- # return 0 00:06:28.605 12:50:32 -- setup/hugepages.sh@99 -- # surp=0 00:06:28.605 12:50:32 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:28.605 12:50:32 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:28.605 12:50:32 -- setup/common.sh@18 -- # local node= 00:06:28.605 12:50:32 -- setup/common.sh@19 -- # local var val 00:06:28.605 12:50:32 -- setup/common.sh@20 -- # local mem_f mem 00:06:28.605 12:50:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:28.605 12:50:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:28.605 12:50:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:28.605 12:50:32 -- setup/common.sh@28 -- # mapfile -t mem 00:06:28.605 12:50:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.605 12:50:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5143920 kB' 'MemAvailable: 9519592 kB' 'Buffers: 37656 kB' 'Cached: 4462420 
kB' 'SwapCached: 0 kB' 'Active: 1205828 kB' 'Inactive: 3412344 kB' 'Active(anon): 127124 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1078704 kB' 'Inactive(file): 3410556 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 544 kB' 'Writeback: 0 kB' 'AnonPages: 136504 kB' 'Mapped: 72576 kB' 'Shmem: 2616 kB' 'KReclaimable: 208276 kB' 'Slab: 298352 kB' 'SReclaimable: 208276 kB' 'SUnreclaim: 90076 kB' 'KernelStack: 4272 kB' 'PageTables: 2932 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 608424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14132 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB' 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.605 12:50:32 -- 
setup/common.sh@31 -- # read -r var val _ 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # 
continue 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.605 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.605 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 12:50:32 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.606 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 12:50:32 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.606 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 12:50:32 -- setup/common.sh@32 
-- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.606 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 12:50:32 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.606 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 12:50:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.606 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 12:50:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.606 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 12:50:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.606 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 12:50:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.606 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 12:50:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.606 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 12:50:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.606 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 12:50:32 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.606 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 12:50:32 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.606 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 12:50:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.606 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 12:50:32 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.606 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 12:50:32 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:28.606 12:50:32 -- setup/common.sh@33 -- # echo 0 00:06:28.606 12:50:32 -- 
setup/common.sh@33 -- # return 0 00:06:28.606 12:50:32 -- setup/hugepages.sh@100 -- # resv=0 00:06:28.606 nr_hugepages=1024 00:06:28.606 resv_hugepages=0 00:06:28.606 12:50:32 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:28.606 12:50:32 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:28.606 surplus_hugepages=0 00:06:28.606 anon_hugepages=0 00:06:28.606 12:50:32 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:28.606 12:50:32 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:28.606 12:50:32 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:28.606 12:50:32 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:28.606 12:50:32 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:28.606 12:50:32 -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:28.606 12:50:32 -- setup/common.sh@18 -- # local node= 00:06:28.606 12:50:32 -- setup/common.sh@19 -- # local var val 00:06:28.606 12:50:32 -- setup/common.sh@20 -- # local mem_f mem 00:06:28.606 12:50:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:28.606 12:50:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:28.606 12:50:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:28.606 12:50:32 -- setup/common.sh@28 -- # mapfile -t mem 00:06:28.606 12:50:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 12:50:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5144140 kB' 'MemAvailable: 9519812 kB' 'Buffers: 37656 kB' 'Cached: 4462420 kB' 'SwapCached: 0 kB' 'Active: 1205916 kB' 'Inactive: 3412344 kB' 'Active(anon): 127212 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1078704 kB' 'Inactive(file): 3410556 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 544 kB' 'Writeback: 0 kB' 'AnonPages: 136688 kB' 'Mapped: 72548 kB' 'Shmem: 2616 kB' 'KReclaimable: 208276 kB' 'Slab: 298352 kB' 'SReclaimable: 208276 kB' 'SUnreclaim: 90076 kB' 'KernelStack: 4340 kB' 'PageTables: 2916 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 613696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14132 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB' 00:06:28.606 12:50:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.606 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 12:50:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.606 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 12:50:32 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.606 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # 
IFS=': ' 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 12:50:32 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.606 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 12:50:32 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.606 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 12:50:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.606 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 12:50:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.606 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 12:50:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.606 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 12:50:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.606 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 12:50:32 
-- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 12:50:32 -- setup/common.sh@32 -- # continue 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.608 12:50:32 -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:28.608 12:50:32 -- setup/common.sh@32 -- # continue
00:06:28.608 12:50:32 -- setup/common.sh@31 -- # IFS=': '
00:06:28.608 12:50:32 -- setup/common.sh@31 -- # read -r var val _
... [xtrace condensed: the same IFS/read/continue cycle repeats for FilePmdMapped, CmaTotal and CmaFree] ...
00:06:28.608 12:50:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:28.608 12:50:32 -- setup/common.sh@33 -- # echo 1024
00:06:28.608 12:50:32 -- setup/common.sh@33 -- # return 0
00:06:28.608 12:50:32 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:28.608 12:50:32 -- setup/hugepages.sh@112 -- # get_nodes
00:06:28.608 12:50:32 -- setup/hugepages.sh@27 -- # local node
00:06:28.608 12:50:32 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:28.608 12:50:32 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:06:28.608 12:50:32 -- setup/hugepages.sh@32 -- # no_nodes=1
00:06:28.608 12:50:32 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:06:28.608 12:50:32 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:28.608 12:50:32 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:06:28.608 12:50:32 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:06:28.608 12:50:32 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:28.608 12:50:32 -- setup/common.sh@18 -- # local node=0
00:06:28.608 12:50:32 -- setup/common.sh@19 -- # local var val
00:06:28.608 12:50:32 -- setup/common.sh@20 -- # local mem_f mem
00:06:28.608 12:50:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:28.608 12:50:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:06:28.608 12:50:32 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:06:28.608 12:50:32 -- setup/common.sh@28 -- # mapfile -t mem
00:06:28.608 12:50:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:28.608 12:50:32 -- setup/common.sh@31 -- # IFS=': '
00:06:28.608 12:50:32 -- setup/common.sh@31 -- # read -r var val _
00:06:28.608 12:50:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5144392 kB' 'MemUsed: 7106712 kB' 'Active: 1206080 kB' 'Inactive: 3412344 kB' 'Active(anon): 127376 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1078704 kB' 'Inactive(file): 3410556 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'Dirty: 544 kB' 'Writeback: 0 kB' 'FilePages: 4500076 kB' 'Mapped: 72548 kB' 'AnonPages: 136852 kB' 'Shmem: 2616 kB' 'KernelStack: 4340 kB' 'PageTables: 2916 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 208276 kB' 'Slab: 298352 kB' 'SReclaimable: 208276 kB' 'SUnreclaim: 90076 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:06:28.608 12:50:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:28.608 12:50:32 -- setup/common.sh@32 -- # continue
... [xtrace condensed: the identical IFS/read/continue cycle repeats for every non-matching node0 key from MemFree through HugePages_Free] ...
00:06:28.608 12:50:32 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:28.608 12:50:32 -- setup/common.sh@33 -- # echo 0
00:06:28.608 12:50:32 -- setup/common.sh@33 -- # return 0
00:06:28.608 12:50:32 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:06:28.609 12:50:32 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:06:28.609 12:50:32 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:06:28.609 12:50:32 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:06:28.609 node0=1024 expecting 1024
00:06:28.609 12:50:32 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:06:28.609 12:50:32 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:06:28.609 12:50:32 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:06:28.609 12:50:32 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:06:28.609 12:50:32 -- setup/hugepages.sh@202 -- # setup output
00:06:28.609 12:50:32 -- setup/common.sh@9 -- # [[ output == output ]]
00:06:28.609 12:50:32 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
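For reference, a condensed sketch of the get_meminfo helper whose xtrace fills this section (setup/common.sh@17-33). It is reconstructed from the trace rather than copied from the SPDK tree, so the extglob guard and the -n test on the node argument are assumptions; the per-node file selection and the key/value parsing are exactly what the trace shows.

#!/usr/bin/env bash
shopt -s extglob   # the trace relies on the +([0-9]) extended glob

# get_meminfo KEY [NODE] -> print the value column for KEY
get_meminfo() {
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo mem
    # with a node argument, the per-node sysfs file wins (common.sh@23-24)
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # per-node files prefix every line with "Node <n> "; strip it (common.sh@29)
    mem=("${mem[@]#Node +([0-9]) }")
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the long runs of "continue" in the log
        echo "$val"
        return 0
    done
    return 1
}

# usage mirroring the trace: per-node surplus pages, then a system-wide total
get_meminfo HugePages_Surp 0    # prints 0 in the run above
get_meminfo HugePages_Total     # prints 1024 in the run above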
00:06:28.870 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:06:28.870 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:06:28.870 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:06:28.870 12:50:32 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:06:28.870 12:50:32 -- setup/hugepages.sh@89 -- # local node
00:06:28.870 12:50:32 -- setup/hugepages.sh@90 -- # local sorted_t
00:06:28.870 12:50:32 -- setup/hugepages.sh@91 -- # local sorted_s
00:06:28.870 12:50:32 -- setup/hugepages.sh@92 -- # local surp
00:06:28.870 12:50:32 -- setup/hugepages.sh@93 -- # local resv
00:06:28.870 12:50:32 -- setup/hugepages.sh@94 -- # local anon
00:06:28.870 12:50:32 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:28.870 12:50:32 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:28.870 12:50:32 -- setup/common.sh@17 -- # local get=AnonHugePages
00:06:28.870 12:50:32 -- setup/common.sh@18 -- # local node=
00:06:28.870 12:50:32 -- setup/common.sh@19 -- # local var val
00:06:28.870 12:50:32 -- setup/common.sh@20 -- # local mem_f mem
00:06:28.870 12:50:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:28.870 12:50:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:28.870 12:50:32 -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:28.870 12:50:32 -- setup/common.sh@28 -- # mapfile -t mem
00:06:28.870 12:50:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:28.871 12:50:32 -- setup/common.sh@31 -- # IFS=': '
00:06:28.871 12:50:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5144320 kB' 'MemAvailable: 9519992 kB' 'Buffers: 37656 kB' 'Cached: 4462420 kB' 'SwapCached: 0 kB' 'Active: 1206664 kB' 'Inactive: 3412344 kB' 'Active(anon): 127960 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1078704 kB' 'Inactive(file): 3410556 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 544 kB' 'Writeback: 0 kB' 'AnonPages: 137124 kB' 'Mapped: 73280 kB' 'Shmem: 2616 kB' 'KReclaimable: 208276 kB' 'Slab: 298328 kB' 'SReclaimable: 208276 kB' 'SUnreclaim: 90052 kB' 'KernelStack: 4488 kB' 'PageTables: 3444 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 603600 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14116 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
00:06:28.871 12:50:32 -- setup/common.sh@31 -- # read -r var val _
00:06:28.871 12:50:32 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:28.871 12:50:32 -- setup/common.sh@32 -- # continue
... [xtrace condensed: the identical IFS/read/continue cycle repeats for every non-matching /proc/meminfo key from MemFree through HardwareCorrupted] ...
00:06:28.872 12:50:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:28.872 12:50:32 -- setup/common.sh@33 -- # echo 0
00:06:28.872 12:50:32 -- setup/common.sh@33 -- # return 0
00:06:28.872 12:50:32 -- setup/hugepages.sh@97 -- # anon=0
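The hugepages.sh@96 test above is the transparent-hugepage gate: verify_nr_hugepages only samples AnonHugePages when THP is not pinned to never in sysfs. A minimal sketch of that check, with illustrative variable names (the sysfs value "always [madvise] never" is the one traced on this VM):

thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
if [[ $thp != *"[never]"* ]]; then
    # THP can hand out anonymous huge pages, so they are counted separately
    anon=$(get_meminfo AnonHugePages)   # 0 kB in the run above
else
    anon=0
fi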
00:06:28.872 12:50:32 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:28.872 12:50:32 -- setup/common.sh@17 -- # local get=HugePages_Surp
... [xtrace condensed: the common.sh@18-31 preamble repeats unchanged; node is empty, so mem_f stays /proc/meminfo] ...
00:06:28.872 12:50:32 -- setup/common.sh@31 -- # read -r var val _
00:06:28.872 12:50:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5144164 kB' 'MemAvailable: 9519836 kB' 'Buffers: 37656 kB' 'Cached: 4462420 kB' 'SwapCached: 0 kB' 'Active: 1206400 kB' 'Inactive: 3412344 kB' 'Active(anon): 127696 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1078704 kB' 'Inactive(file): 3410556 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 544 kB' 'Writeback: 0 kB' 'AnonPages: 137176 kB' 'Mapped: 73260 kB' 'Shmem: 2616 kB' 'KReclaimable: 208276 kB' 'Slab: 298328 kB' 'SReclaimable: 208276 kB' 'SUnreclaim: 90052 kB' 'KernelStack: 4424 kB' 'PageTables: 3000 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 603600 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14116 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
00:06:28.872 12:50:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:28.872 12:50:32 -- setup/common.sh@32 -- # continue
... [xtrace condensed: the identical IFS/read/continue cycle repeats for every non-matching key from MemFree through HugePages_Rsvd] ...
00:06:28.873 12:50:32 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:28.873 12:50:32 -- setup/common.sh@33 -- # echo 0
00:06:28.873 12:50:32 -- setup/common.sh@33 -- # return 0
00:06:28.873 12:50:32 -- setup/hugepages.sh@99 -- # surp=0
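A note on how to read these comparisons: the right-hand side of each == is a quoted variable inside [[ ]], so it matches literally rather than as a glob, and xtrace renders that by escaping every character. A two-line repro of the effect (illustrative, not part of the test):

set -x
get=HugePages_Surp
[[ HugePages_Free == "$get" ]]   # traced as: [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]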
00:06:28.873 12:50:32 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:28.873 12:50:32 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
... [xtrace condensed: the common.sh@18-31 preamble repeats unchanged against /proc/meminfo] ...
00:06:28.873 12:50:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5144164 kB' 'MemAvailable: 9519836 kB' 'Buffers: 37656 kB' 'Cached: 4462420 kB' 'SwapCached: 0 kB' 'Active: 1206400 kB' 'Inactive: 3412344 kB' 'Active(anon): 127696 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1078704 kB' 'Inactive(file): 3410556 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 544 kB' 'Writeback: 0 kB' 'AnonPages: 136660 kB' 'Mapped: 73260 kB' 'Shmem: 2616 kB' 'KReclaimable: 208276 kB' 'Slab: 298328 kB' 'SReclaimable: 208276 kB' 'SUnreclaim: 90052 kB' 'KernelStack: 4424 kB' 'PageTables: 3000 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 603600 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14116 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
00:06:28.873 12:50:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:28.873 12:50:32 -- setup/common.sh@32 -- # continue
... [xtrace condensed: the identical IFS/read/continue cycle repeats for every non-matching key from MemFree through HugePages_Free] ...
00:06:28.874 12:50:32 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:28.874 12:50:32 -- setup/common.sh@33 -- # echo 0
00:06:28.874 12:50:32 -- setup/common.sh@33 -- # return 0
00:06:28.874 12:50:32 -- setup/hugepages.sh@100 -- # resv=0
00:06:28.874 nr_hugepages=1024
00:06:28.874 12:50:32 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:06:28.874 resv_hugepages=0
00:06:28.874 12:50:32 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:06:28.874 surplus_hugepages=0
00:06:28.874 12:50:32 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:06:28.874 anon_hugepages=0
00:06:28.874 12:50:32 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:06:28.874 12:50:32 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:28.874 12:50:32 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:06:28.874 12:50:32 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:06:28.874 12:50:32 -- setup/common.sh@17 -- # local get=HugePages_Total
... [xtrace condensed: the common.sh@18-31 preamble repeats unchanged against /proc/meminfo] ...
00:06:28.874 12:50:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5144228 kB' 'MemAvailable: 9519900 kB' 'Buffers: 37656 kB' 'Cached: 4462420 kB' 'SwapCached: 0 kB' 'Active: 1206408 kB' 'Inactive: 3412344 kB' 'Active(anon): 127704 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1078704 kB' 'Inactive(file): 3410556 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 544 kB' 'Writeback: 0 kB' 'AnonPages: 136708 kB' 'Mapped: 73052 kB' 'Shmem: 2616 kB' 'KReclaimable: 208276 kB' 'Slab: 298388 kB' 'SReclaimable: 208276 kB' 'SUnreclaim: 90112 kB' 'KernelStack: 4380 kB' 'PageTables: 2780 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 614076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14132 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
2616 kB' 'KReclaimable: 208276 kB' 'Slab: 298388 kB' 'SReclaimable: 208276 kB' 'SUnreclaim: 90112 kB' 'KernelStack: 4380 kB' 'PageTables: 2780 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076976 kB' 'Committed_AS: 614076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14132 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB' 00:06:28.874
[xtrace trimmed: setup/common.sh@31-32 walks the meminfo dump above line by line (IFS=': '; read -r var val _) and issues `continue` for every key from MemTotal through CmaFree that is not HugePages_Total]
00:06:28.876 12:50:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.876 12:50:32 -- setup/common.sh@33 -- # echo 1024 00:06:28.876 12:50:32 -- setup/common.sh@33 -- # return 0 00:06:28.876 12:50:32 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:28.876 12:50:32 -- setup/hugepages.sh@112 -- # get_nodes 00:06:28.876 12:50:32 -- setup/hugepages.sh@27 -- # local node 00:06:28.876 12:50:32 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:28.876 12:50:32 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:28.876 12:50:32 -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:28.876 12:50:32 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:28.876 12:50:32 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:28.876 12:50:32 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:28.876
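The trace above shows how the harness reads hugepage counters: setup/common.sh's get_meminfo snapshots the (per-node) meminfo file and scans it key by key until it reaches the requested field. A minimal bash sketch of that technique, reconstructed from the trace rather than copied from the SPDK helper, assuming only the standard /proc and sysfs layouts:

# get_meminfo KEY [NODE] - print KEY's value from /proc/meminfo, or from
# the per-NUMA-node copy under sysfs when a node number is given.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while IFS= read -r line; do
        line=${line#Node * }               # per-node lines carry a "Node N " prefix
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # skip keys until the requested one
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

get_meminfo HugePages_Total 0   # prints 1024 in the run traced above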
12:50:32 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:28.876 12:50:32 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:28.876 12:50:32 -- setup/common.sh@18 -- # local node=0 00:06:28.876 12:50:32 -- setup/common.sh@19 -- # local var val 00:06:28.876 12:50:32 -- setup/common.sh@20 -- # local mem_f mem 00:06:28.876 12:50:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:28.876 12:50:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:28.876 12:50:32 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:28.876 12:50:32 -- setup/common.sh@28 -- # mapfile -t mem 00:06:28.876 12:50:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:28.876 12:50:32 -- setup/common.sh@31 -- # IFS=': ' 00:06:28.876 12:50:32 -- setup/common.sh@31 -- # read -r var val _ 00:06:28.876 12:50:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251104 kB' 'MemFree: 5144236 kB' 'MemUsed: 7106868 kB' 'Active: 1206204 kB' 'Inactive: 3412344 kB' 'Active(anon): 127500 kB' 'Inactive(anon): 1788 kB' 'Active(file): 1078704 kB' 'Inactive(file): 3410556 kB' 'Unevictable: 18504 kB' 'Mlocked: 18504 kB' 'Dirty: 544 kB' 'Writeback: 0 kB' 'FilePages: 4500076 kB' 'Mapped: 73016 kB' 'AnonPages: 136548 kB' 'Shmem: 2616 kB' 'KernelStack: 4396 kB' 'PageTables: 2752 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 208276 kB' 'Slab: 298632 kB' 'SReclaimable: 208276 kB' 'SUnreclaim: 90356 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:28.876
[xtrace trimmed: the same setup/common.sh@31-32 scan loop runs over the node0 dump above, issuing `continue` for every key from MemTotal through HugePages_Free until it reaches HugePages_Surp]
12:50:32 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.877 12:50:32 -- setup/common.sh@33 -- # echo 0 00:06:28.877 12:50:32 -- setup/common.sh@33 -- # return 0 00:06:28.877 12:50:32 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:28.877 12:50:32 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:28.877 12:50:32 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:28.877 12:50:32 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:28.877 12:50:32 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:28.877 node0=1024 expecting 1024 00:06:28.877 12:50:32 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:28.877 00:06:28.877 real 0m1.255s 00:06:28.877 user 0m0.436s 00:06:28.877 sys 0m0.880s 00:06:28.877 12:50:32 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:06:28.877 12:50:32 -- common/autotest_common.sh@10 -- # set +x 00:06:28.877 ************************************ 00:06:28.877 END TEST no_shrink_alloc 00:06:28.877 ************************************ 00:06:28.877 12:50:32 -- setup/hugepages.sh@217 -- # clear_hp 00:06:28.877 12:50:32 -- setup/hugepages.sh@37 -- # local node hp 00:06:28.877 12:50:32 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:06:28.877 12:50:32 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:28.877 12:50:32 -- setup/hugepages.sh@41 -- # echo 0 00:06:28.877 12:50:32 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:28.877 12:50:32 -- setup/hugepages.sh@41 -- # echo 0 00:06:28.877 12:50:32 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:06:28.877 12:50:32 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:06:28.877 00:06:28.877 real 0m6.033s 00:06:28.877 user 0m1.990s 00:06:28.877 sys 0m4.153s 00:06:28.877 12:50:32 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:06:28.877 ************************************ 00:06:28.877 END TEST hugepages 00:06:28.877 ************************************ 00:06:28.877 12:50:32 -- common/autotest_common.sh@10 -- # set +x 00:06:29.135 12:50:33 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:06:29.135 12:50:33 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:29.135 12:50:33 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:29.135 12:50:33 -- common/autotest_common.sh@10 -- # set +x 00:06:29.135 ************************************ 00:06:29.135 START TEST driver 00:06:29.135 ************************************ 00:06:29.135 12:50:33 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:06:29.135 * Looking for test storage...
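The clear_hp trace just above resets every hugepage pool the suite reserved. A hedged sketch of that cleanup, assuming the usual sysfs hugepage layout (the nr_hugepages target of the traced `echo 0` is implied by the loop, not shown explicitly):

# Free all reserved hugepages on every NUMA node (needs root).
clear_hp() {
    local node hp
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"   # writing 0 releases the pool
        done
    done
    export CLEAR_HUGE=yes   # consumed by later setup.sh runs, per the trace
}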
00:06:29.135 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:29.135 12:50:33 -- setup/driver.sh@68 -- # setup reset 00:06:29.135 12:50:33 -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:29.135 12:50:33 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:29.436 12:50:33 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:06:29.436 12:50:33 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:29.436 12:50:33 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:29.436 12:50:33 -- common/autotest_common.sh@10 -- # set +x 00:06:29.694 ************************************ 00:06:29.694 START TEST guess_driver 00:06:29.694 ************************************ 00:06:29.694 12:50:33 -- common/autotest_common.sh@1099 -- # guess_driver 00:06:29.694 12:50:33 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:06:29.694 12:50:33 -- setup/driver.sh@47 -- # local fail=0 00:06:29.694 12:50:33 -- setup/driver.sh@49 -- # pick_driver 00:06:29.694 12:50:33 -- setup/driver.sh@36 -- # vfio 00:06:29.694 12:50:33 -- setup/driver.sh@21 -- # local iommu_grups 00:06:29.694 12:50:33 -- setup/driver.sh@22 -- # local unsafe_vfio 00:06:29.694 12:50:33 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:06:29.694 12:50:33 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:06:29.694 12:50:33 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:06:29.694 12:50:33 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:06:29.694 12:50:33 -- setup/driver.sh@29 -- # [[ N == Y ]] 00:06:29.694 12:50:33 -- setup/driver.sh@32 -- # return 1 00:06:29.694 12:50:33 -- setup/driver.sh@38 -- # uio 00:06:29.694 12:50:33 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:06:29.694 12:50:33 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:06:29.694 12:50:33 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:06:29.694 12:50:33 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:06:29.694 12:50:33 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/5.4.0-176-generic/kernel/drivers/uio/uio.ko 00:06:29.694 insmod /lib/modules/5.4.0-176-generic/kernel/drivers/uio/uio_pci_generic.ko == *\.\k\o* ]] 00:06:29.694 12:50:33 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:06:29.694 12:50:33 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:06:29.694 12:50:33 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:06:29.694 Looking for driver=uio_pci_generic 00:06:29.694 12:50:33 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:06:29.694 12:50:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:29.694 12:50:33 -- setup/driver.sh@45 -- # setup output config 00:06:29.694 12:50:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:29.694 12:50:33 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:29.952 12:50:33 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:06:29.952 12:50:33 -- setup/driver.sh@58 -- # continue 00:06:29.952 12:50:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:29.952 12:50:34 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:29.952 12:50:34 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:06:29.952 12:50:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:31.326 12:50:35 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:06:31.326 12:50:35 -- setup/driver.sh@65 -- # setup reset 
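The pick_driver logic behind guess_driver is fully visible in the trace: prefer vfio when an IOMMU group exists (or unsafe no-IOMMU mode is enabled), otherwise fall back to uio_pci_generic after confirming the module actually resolves. An illustrative reconstruction, not the exact setup/driver.sh source:

shopt -s nullglob   # empty glob -> empty array, as in the traced scripts

pick_driver() {
    # vfio needs a working IOMMU, or the explicitly enabled unsafe mode.
    local iommu_groups=(/sys/kernel/iommu_groups/*)
    local unsafe_vfio=N
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
        echo vfio-pci
        return 0
    fi
    # Fall back to uio_pci_generic if modprobe resolves it to real .ko files.
    if modprobe --show-depends uio_pci_generic 2> /dev/null | grep -q '\.ko'; then
        echo uio_pci_generic
        return 0
    fi
    echo 'No valid driver found'   # sentinel string checked by the caller
    return 1
}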
00:06:31.326 12:50:35 -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:31.326 12:50:35 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:31.584 00:06:31.584 real 0m1.916s 00:06:31.584 user 0m0.427s 00:06:31.584 sys 0m1.464s 00:06:31.585 12:50:35 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:06:31.585 ************************************ 00:06:31.585 END TEST guess_driver 00:06:31.585 ************************************ 00:06:31.585 12:50:35 -- common/autotest_common.sh@10 -- # set +x 00:06:31.585 00:06:31.585 real 0m2.492s 00:06:31.585 user 0m0.722s 00:06:31.585 sys 0m1.762s 00:06:31.585 12:50:35 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:06:31.585 ************************************ 00:06:31.585 12:50:35 -- common/autotest_common.sh@10 -- # set +x 00:06:31.585 END TEST driver 00:06:31.585 ************************************ 00:06:31.585 12:50:35 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:06:31.585 12:50:35 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:31.585 12:50:35 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:31.585 12:50:35 -- common/autotest_common.sh@10 -- # set +x 00:06:31.585 ************************************ 00:06:31.585 START TEST devices 00:06:31.585 ************************************ 00:06:31.585 12:50:35 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:06:31.585 * Looking for test storage... 00:06:31.585 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:31.585 12:50:35 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:06:31.585 12:50:35 -- setup/devices.sh@192 -- # setup reset 00:06:31.585 12:50:35 -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:31.585 12:50:35 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:32.151 12:50:36 -- setup/devices.sh@194 -- # get_zoned_devs 00:06:32.151 12:50:36 -- common/autotest_common.sh@1643 -- # zoned_devs=() 00:06:32.151 12:50:36 -- common/autotest_common.sh@1643 -- # local -gA zoned_devs 00:06:32.151 12:50:36 -- common/autotest_common.sh@1644 -- # local nvme bdf 00:06:32.151 12:50:36 -- common/autotest_common.sh@1646 -- # for nvme in /sys/block/nvme* 00:06:32.151 12:50:36 -- common/autotest_common.sh@1647 -- # is_block_zoned nvme0n1 00:06:32.151 12:50:36 -- common/autotest_common.sh@1636 -- # local device=nvme0n1 00:06:32.151 12:50:36 -- common/autotest_common.sh@1638 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:32.151 12:50:36 -- common/autotest_common.sh@1639 -- # [[ none != none ]] 00:06:32.151 12:50:36 -- setup/devices.sh@196 -- # blocks=() 00:06:32.151 12:50:36 -- setup/devices.sh@196 -- # declare -a blocks 00:06:32.151 12:50:36 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:06:32.151 12:50:36 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:06:32.151 12:50:36 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:06:32.151 12:50:36 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:32.151 12:50:36 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:06:32.151 12:50:36 -- setup/devices.sh@201 -- # ctrl=nvme0 00:06:32.151 12:50:36 -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:06:32.151 12:50:36 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:06:32.151 12:50:36 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:06:32.151 12:50:36 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:06:32.151 12:50:36 -- 
scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:06:32.151 No valid GPT data, bailing 00:06:32.151 12:50:36 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:32.151 12:50:36 -- scripts/common.sh@391 -- # pt= 00:06:32.151 12:50:36 -- scripts/common.sh@392 -- # return 1 00:06:32.151 12:50:36 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:06:32.151 12:50:36 -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:32.151 12:50:36 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:32.151 12:50:36 -- setup/common.sh@80 -- # echo 5368709120 00:06:32.151 12:50:36 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:06:32.151 12:50:36 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:32.151 12:50:36 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:06:32.151 12:50:36 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:06:32.151 12:50:36 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:06:32.151 12:50:36 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:06:32.151 12:50:36 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:32.151 12:50:36 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:32.151 12:50:36 -- common/autotest_common.sh@10 -- # set +x 00:06:32.151 ************************************ 00:06:32.151 START TEST nvme_mount 00:06:32.151 ************************************ 00:06:32.151 12:50:36 -- common/autotest_common.sh@1099 -- # nvme_mount 00:06:32.151 12:50:36 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:06:32.151 12:50:36 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:06:32.151 12:50:36 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:32.151 12:50:36 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:32.151 12:50:36 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:06:32.151 12:50:36 -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:32.151 12:50:36 -- setup/common.sh@40 -- # local part_no=1 00:06:32.151 12:50:36 -- setup/common.sh@41 -- # local size=1073741824 00:06:32.151 12:50:36 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:32.151 12:50:36 -- setup/common.sh@44 -- # parts=() 00:06:32.151 12:50:36 -- setup/common.sh@44 -- # local parts 00:06:32.151 12:50:36 -- setup/common.sh@46 -- # (( part = 1 )) 00:06:32.151 12:50:36 -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:32.151 12:50:36 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:32.151 12:50:36 -- setup/common.sh@46 -- # (( part++ )) 00:06:32.151 12:50:36 -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:32.151 12:50:36 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:06:32.151 12:50:36 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:32.151 12:50:36 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:06:33.526 Creating new GPT entries in memory. 00:06:33.526 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:33.526 other utilities. 00:06:33.526 12:50:37 -- setup/common.sh@57 -- # (( part = 1 )) 00:06:33.526 12:50:37 -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:33.526 12:50:37 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:06:33.526 12:50:37 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:33.526 12:50:37 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:06:34.461 Creating new GPT entries in memory. 00:06:34.461 The operation has completed successfully. 00:06:34.461 12:50:38 -- setup/common.sh@57 -- # (( part++ )) 00:06:34.461 12:50:38 -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:34.461 12:50:38 -- setup/common.sh@62 -- # wait 102719 00:06:34.461 12:50:38 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:34.461 12:50:38 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:06:34.461 12:50:38 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:34.461 12:50:38 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:06:34.461 12:50:38 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:06:34.461 12:50:38 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:34.461 12:50:38 -- setup/devices.sh@105 -- # verify 0000:00:10.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:34.461 12:50:38 -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:06:34.461 12:50:38 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:06:34.461 12:50:38 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:34.461 12:50:38 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:34.461 12:50:38 -- setup/devices.sh@53 -- # local found=0 00:06:34.461 12:50:38 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:34.461 12:50:38 -- setup/devices.sh@56 -- # : 00:06:34.461 12:50:38 -- setup/devices.sh@59 -- # local pci status 00:06:34.461 12:50:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:34.461 12:50:38 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:06:34.461 12:50:38 -- setup/devices.sh@47 -- # setup output config 00:06:34.461 12:50:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:34.461 12:50:38 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:34.461 12:50:38 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:34.461 12:50:38 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:06:34.461 12:50:38 -- setup/devices.sh@63 -- # found=1 00:06:34.461 12:50:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:34.461 12:50:38 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:34.461 12:50:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:34.719 12:50:38 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:34.719 12:50:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:35.654 12:50:39 -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:35.654 12:50:39 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:06:35.654 12:50:39 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:35.654 12:50:39 -- setup/devices.sh@73 -- # 
[[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:35.654 12:50:39 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:35.654 12:50:39 -- setup/devices.sh@110 -- # cleanup_nvme 00:06:35.654 12:50:39 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:35.654 12:50:39 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:35.654 12:50:39 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:35.654 12:50:39 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:35.654 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:35.654 12:50:39 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:35.654 12:50:39 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:35.654 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:06:35.654 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:06:35.654 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:35.654 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:35.654 12:50:39 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:06:35.654 12:50:39 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:06:35.654 12:50:39 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:35.654 12:50:39 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:06:35.654 12:50:39 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:06:35.654 12:50:39 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:35.654 12:50:39 -- setup/devices.sh@116 -- # verify 0000:00:10.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:35.654 12:50:39 -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:06:35.654 12:50:39 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:06:35.654 12:50:39 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:35.654 12:50:39 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:35.654 12:50:39 -- setup/devices.sh@53 -- # local found=0 00:06:35.654 12:50:39 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:35.654 12:50:39 -- setup/devices.sh@56 -- # : 00:06:35.654 12:50:39 -- setup/devices.sh@59 -- # local pci status 00:06:35.654 12:50:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:35.654 12:50:39 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:06:35.654 12:50:39 -- setup/devices.sh@47 -- # setup output config 00:06:35.654 12:50:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:35.654 12:50:39 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:35.913 12:50:39 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:35.913 12:50:39 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:06:35.913 12:50:39 -- setup/devices.sh@63 -- # found=1 00:06:35.913 12:50:39 -- setup/devices.sh@60 -- # read -r pci 
_ _ status 00:06:35.913 12:50:39 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:35.913 12:50:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:35.913 12:50:40 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:35.913 12:50:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:37.296 12:50:41 -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:37.296 12:50:41 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:06:37.296 12:50:41 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:37.296 12:50:41 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:37.296 12:50:41 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:37.296 12:50:41 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:37.296 12:50:41 -- setup/devices.sh@125 -- # verify 0000:00:10.0 data@nvme0n1 '' '' 00:06:37.296 12:50:41 -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:06:37.296 12:50:41 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:06:37.296 12:50:41 -- setup/devices.sh@50 -- # local mount_point= 00:06:37.296 12:50:41 -- setup/devices.sh@51 -- # local test_file= 00:06:37.296 12:50:41 -- setup/devices.sh@53 -- # local found=0 00:06:37.296 12:50:41 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:37.296 12:50:41 -- setup/devices.sh@59 -- # local pci status 00:06:37.296 12:50:41 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:06:37.296 12:50:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:37.296 12:50:41 -- setup/devices.sh@47 -- # setup output config 00:06:37.296 12:50:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:37.296 12:50:41 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:37.296 12:50:41 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:37.296 12:50:41 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:06:37.296 12:50:41 -- setup/devices.sh@63 -- # found=1 00:06:37.296 12:50:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:37.296 12:50:41 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:37.296 12:50:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:37.296 12:50:41 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:37.296 12:50:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:38.231 12:50:42 -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:38.231 12:50:42 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:38.231 12:50:42 -- setup/devices.sh@68 -- # return 0 00:06:38.231 12:50:42 -- setup/devices.sh@128 -- # cleanup_nvme 00:06:38.231 12:50:42 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:38.231 12:50:42 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:38.231 12:50:42 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:38.231 12:50:42 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:38.231 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:38.231 ************************************ 00:06:38.231 END TEST nvme_mount 00:06:38.231 ************************************ 00:06:38.231 00:06:38.231 real 0m6.160s 00:06:38.231 user 
0m0.620s 00:06:38.231 sys 0m3.407s 00:06:38.231 12:50:42 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:06:38.231 12:50:42 -- common/autotest_common.sh@10 -- # set +x 00:06:38.489 12:50:42 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:06:38.489 12:50:42 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:38.489 12:50:42 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:38.489 12:50:42 -- common/autotest_common.sh@10 -- # set +x 00:06:38.489 ************************************ 00:06:38.489 START TEST dm_mount 00:06:38.489 ************************************ 00:06:38.489 12:50:42 -- common/autotest_common.sh@1099 -- # dm_mount 00:06:38.489 12:50:42 -- setup/devices.sh@144 -- # pv=nvme0n1 00:06:38.489 12:50:42 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:06:38.489 12:50:42 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:06:38.489 12:50:42 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:06:38.489 12:50:42 -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:38.489 12:50:42 -- setup/common.sh@40 -- # local part_no=2 00:06:38.489 12:50:42 -- setup/common.sh@41 -- # local size=1073741824 00:06:38.489 12:50:42 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:38.489 12:50:42 -- setup/common.sh@44 -- # parts=() 00:06:38.489 12:50:42 -- setup/common.sh@44 -- # local parts 00:06:38.489 12:50:42 -- setup/common.sh@46 -- # (( part = 1 )) 00:06:38.489 12:50:42 -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:38.490 12:50:42 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:38.490 12:50:42 -- setup/common.sh@46 -- # (( part++ )) 00:06:38.490 12:50:42 -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:38.490 12:50:42 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:38.490 12:50:42 -- setup/common.sh@46 -- # (( part++ )) 00:06:38.490 12:50:42 -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:38.490 12:50:42 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:06:38.490 12:50:42 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:38.490 12:50:42 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:06:39.424 Creating new GPT entries in memory. 00:06:39.424 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:39.424 other utilities. 00:06:39.424 12:50:43 -- setup/common.sh@57 -- # (( part = 1 )) 00:06:39.424 12:50:43 -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:39.424 12:50:43 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:39.424 12:50:43 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:39.424 12:50:43 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:06:40.798 Creating new GPT entries in memory. 00:06:40.798 The operation has completed successfully. 00:06:40.798 12:50:44 -- setup/common.sh@57 -- # (( part++ )) 00:06:40.798 12:50:44 -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:40.798 12:50:44 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:40.798 12:50:44 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:40.798 12:50:44 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:06:41.732 The operation has completed successfully. 
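Both mount tests drive the same partition_drive helper: zap the GPT with sgdisk, then lay down the partitions back to back starting at sector 2048 while a udev-event listener confirms the kernel has created each node. A sketch reconstructed from the traced arithmetic and sgdisk calls; sync_dev_uevents.sh is the helper path shown in the trace, while backgrounding it and the final wait are our simplification:

# partition_drive DISK [N] - wipe DISK and create N equally sized partitions.
partition_drive() {
    local disk=$1 part_no=${2:-1}
    local size=1073741824                 # bytes per partition
    local part part_start=0 part_end=0 parts=()
    for (( part = 1; part <= part_no; part++ )); do
        parts+=("${disk}p$part")
    done
    (( size /= 4096 ))                    # bytes -> sectors, as traced
    sgdisk "/dev/$disk" --zap-all         # destroy old GPT/MBR structures
    # Wait for the kernel to announce each new partition before returning.
    /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition "${parts[@]}" &
    for (( part = 1; part <= part_no; part++ )); do
        (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
        (( part_end = part_start + size - 1 ))
        flock "/dev/$disk" sgdisk "/dev/$disk" --new=$part:$part_start:$part_end
    done
    wait
}

partition_drive nvme0n1 2   # reproduces --new=1:2048:264191 and --new=2:264192:526335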
00:06:41.732 12:50:45 -- setup/common.sh@57 -- # (( part++ )) 00:06:41.732 12:50:45 -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:41.732 12:50:45 -- setup/common.sh@62 -- # wait 103219 00:06:41.732 12:50:45 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:06:41.732 12:50:45 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:41.732 12:50:45 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:41.732 12:50:45 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:06:41.732 12:50:45 -- setup/devices.sh@160 -- # for t in {1..5} 00:06:41.732 12:50:45 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:41.732 12:50:45 -- setup/devices.sh@161 -- # break 00:06:41.732 12:50:45 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:41.732 12:50:45 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:06:41.732 12:50:45 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:06:41.732 12:50:45 -- setup/devices.sh@166 -- # dm=dm-0 00:06:41.732 12:50:45 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:06:41.732 12:50:45 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:06:41.732 12:50:45 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:41.732 12:50:45 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:06:41.732 12:50:45 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:41.732 12:50:45 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:41.732 12:50:45 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:06:41.732 12:50:45 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:41.732 12:50:45 -- setup/devices.sh@174 -- # verify 0000:00:10.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:41.732 12:50:45 -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:06:41.732 12:50:45 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:06:41.732 12:50:45 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:41.732 12:50:45 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:41.732 12:50:45 -- setup/devices.sh@53 -- # local found=0 00:06:41.732 12:50:45 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:06:41.732 12:50:45 -- setup/devices.sh@56 -- # : 00:06:41.732 12:50:45 -- setup/devices.sh@59 -- # local pci status 00:06:41.733 12:50:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:41.733 12:50:45 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:06:41.733 12:50:45 -- setup/devices.sh@47 -- # setup output config 00:06:41.733 12:50:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:41.733 12:50:45 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:41.990 12:50:45 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:41.990 12:50:45 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:06:41.990 12:50:45 -- setup/devices.sh@63 -- # found=1 00:06:41.990 12:50:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:41.990 12:50:45 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:41.990 12:50:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:41.990 12:50:46 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:41.990 12:50:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:42.974 12:50:47 -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:42.974 12:50:47 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:06:42.974 12:50:47 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:42.974 12:50:47 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:06:42.974 12:50:47 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:42.974 12:50:47 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:42.974 12:50:47 -- setup/devices.sh@184 -- # verify 0000:00:10.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:06:42.974 12:50:47 -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:06:42.974 12:50:47 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:06:42.974 12:50:47 -- setup/devices.sh@50 -- # local mount_point= 00:06:42.974 12:50:47 -- setup/devices.sh@51 -- # local test_file= 00:06:42.974 12:50:47 -- setup/devices.sh@53 -- # local found=0 00:06:42.974 12:50:47 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:42.974 12:50:47 -- setup/devices.sh@59 -- # local pci status 00:06:42.974 12:50:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:42.974 12:50:47 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:06:42.974 12:50:47 -- setup/devices.sh@47 -- # setup output config 00:06:42.974 12:50:47 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:42.974 12:50:47 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:43.233 12:50:47 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:43.233 12:50:47 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:06:43.233 12:50:47 -- setup/devices.sh@63 -- # found=1 00:06:43.233 12:50:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:43.233 12:50:47 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:43.233 12:50:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:43.233 12:50:47 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:43.233 12:50:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:44.611 12:50:48 -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:44.611 12:50:48 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:44.611 12:50:48 -- setup/devices.sh@68 -- # return 0 00:06:44.611 12:50:48 -- setup/devices.sh@187 -- # cleanup_dm 00:06:44.611 12:50:48 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:44.611 12:50:48 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:44.611 12:50:48 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:06:44.611 12:50:48 -- 
setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:44.611 12:50:48 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:06:44.611 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:44.611 12:50:48 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:44.611 12:50:48 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:06:44.611 ************************************ 00:06:44.611 END TEST dm_mount 00:06:44.611 ************************************ 00:06:44.611 00:06:44.611 real 0m6.014s 00:06:44.611 user 0m0.467s 00:06:44.611 sys 0m2.335s 00:06:44.611 12:50:48 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:06:44.611 12:50:48 -- common/autotest_common.sh@10 -- # set +x 00:06:44.611 12:50:48 -- setup/devices.sh@1 -- # cleanup 00:06:44.611 12:50:48 -- setup/devices.sh@11 -- # cleanup_nvme 00:06:44.611 12:50:48 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:44.611 12:50:48 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:44.611 12:50:48 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:44.611 12:50:48 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:44.611 12:50:48 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:44.611 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:06:44.611 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:06:44.611 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:44.611 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:44.611 12:50:48 -- setup/devices.sh@12 -- # cleanup_dm 00:06:44.611 12:50:48 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:44.611 12:50:48 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:44.611 12:50:48 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:44.611 12:50:48 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:44.611 12:50:48 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:06:44.611 12:50:48 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:06:44.611 ************************************ 00:06:44.611 END TEST devices 00:06:44.611 ************************************ 00:06:44.611 00:06:44.611 real 0m12.965s 00:06:44.611 user 0m1.480s 00:06:44.611 sys 0m6.083s 00:06:44.611 12:50:48 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:06:44.611 12:50:48 -- common/autotest_common.sh@10 -- # set +x 00:06:44.611 00:06:44.611 real 0m26.896s 00:06:44.611 user 0m6.125s 00:06:44.611 sys 0m15.540s 00:06:44.611 12:50:48 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:06:44.611 12:50:48 -- common/autotest_common.sh@10 -- # set +x 00:06:44.611 ************************************ 00:06:44.611 END TEST setup.sh 00:06:44.611 ************************************ 00:06:44.611 12:50:48 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:44.870 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:44.870 Hugepages 00:06:44.870 node hugesize free / total 00:06:45.128 node0 1048576kB 0 / 0 00:06:45.128 node0 2048kB 2048 / 2048 00:06:45.128 00:06:45.128 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:45.128 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:45.128 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:45.128 12:50:49 -- spdk/autotest.sh@130 -- # uname -s 00:06:45.128 
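The `setup.sh status` block above compresses two reports into the log: hugepage pools per NUMA node, and each PCI function with the driver it is bound to. A rough stand-alone equivalent built only on standard sysfs paths; the output formatting is ours, not the script's:

# Hugepage pools, per node and page size.
echo 'node hugesize free / total'
for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        printf '%s %s %s / %s\n' "${node##*/}" "${hp##*hugepages-}" \
            "$(< "$hp/free_hugepages")" "$(< "$hp/nr_hugepages")"
    done
done

# PCI functions with vendor/device IDs and bound driver.
echo 'BDF Vendor Device Driver'
for dev in /sys/bus/pci/devices/*; do
    drv=unbound
    [[ -e $dev/driver ]] && drv=$(basename "$(readlink "$dev/driver")")
    printf '%s %s %s %s\n' "${dev##*/}" "$(< "$dev/vendor")" "$(< "$dev/device")" "$drv"
done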
12:50:49 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:06:45.128 12:50:49 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:06:45.128 12:50:49 -- common/autotest_common.sh@1505 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:45.695 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:45.695 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:46.629 12:50:50 -- common/autotest_common.sh@1506 -- # sleep 1 00:06:47.564 12:50:51 -- common/autotest_common.sh@1507 -- # bdfs=() 00:06:47.564 12:50:51 -- common/autotest_common.sh@1507 -- # local bdfs 00:06:47.564 12:50:51 -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:06:47.564 12:50:51 -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:06:47.564 12:50:51 -- common/autotest_common.sh@1487 -- # bdfs=() 00:06:47.564 12:50:51 -- common/autotest_common.sh@1487 -- # local bdfs 00:06:47.564 12:50:51 -- common/autotest_common.sh@1488 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:47.564 12:50:51 -- common/autotest_common.sh@1488 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:47.564 12:50:51 -- common/autotest_common.sh@1488 -- # jq -r '.config[].params.traddr' 00:06:47.822 12:50:51 -- common/autotest_common.sh@1489 -- # (( 1 == 0 )) 00:06:47.822 12:50:51 -- common/autotest_common.sh@1493 -- # printf '%s\n' 0000:00:10.0 00:06:47.822 12:50:51 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:48.079 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:48.079 Waiting for block devices as requested 00:06:48.079 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:48.079 12:50:52 -- common/autotest_common.sh@1512 -- # for bdf in "${bdfs[@]}" 00:06:48.079 12:50:52 -- common/autotest_common.sh@1513 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:48.079 12:50:52 -- common/autotest_common.sh@1476 -- # readlink -f /sys/class/nvme/nvme0 00:06:48.079 12:50:52 -- common/autotest_common.sh@1476 -- # grep 0000:00:10.0/nvme/nvme 00:06:48.079 12:50:52 -- common/autotest_common.sh@1476 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 00:06:48.079 12:50:52 -- common/autotest_common.sh@1477 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 ]] 00:06:48.079 12:50:52 -- common/autotest_common.sh@1481 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 00:06:48.079 12:50:52 -- common/autotest_common.sh@1481 -- # printf '%s\n' nvme0 00:06:48.079 12:50:52 -- common/autotest_common.sh@1513 -- # nvme_ctrlr=/dev/nvme0 00:06:48.079 12:50:52 -- common/autotest_common.sh@1514 -- # [[ -z /dev/nvme0 ]] 00:06:48.079 12:50:52 -- common/autotest_common.sh@1519 -- # nvme id-ctrl /dev/nvme0 00:06:48.079 12:50:52 -- common/autotest_common.sh@1519 -- # grep oacs 00:06:48.079 12:50:52 -- common/autotest_common.sh@1519 -- # cut -d: -f2 00:06:48.080 12:50:52 -- common/autotest_common.sh@1519 -- # oacs=' 0x12a' 00:06:48.080 12:50:52 -- common/autotest_common.sh@1520 -- # oacs_ns_manage=8 00:06:48.080 12:50:52 -- common/autotest_common.sh@1522 -- # [[ 8 -ne 0 ]] 00:06:48.080 12:50:52 -- common/autotest_common.sh@1528 -- # nvme id-ctrl /dev/nvme0 00:06:48.080 12:50:52 -- common/autotest_common.sh@1528 -- # grep unvmcap 00:06:48.080 12:50:52 -- common/autotest_common.sh@1528 -- # cut -d: -f2 00:06:48.080 12:50:52 -- common/autotest_common.sh@1528 -- # unvmcap=' 0' 00:06:48.080 12:50:52 -- common/autotest_common.sh@1529 
-- # [[ 0 -eq 0 ]] 00:06:48.080 12:50:52 -- common/autotest_common.sh@1531 -- # continue 00:06:48.080 12:50:52 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:06:48.080 12:50:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:48.080 12:50:52 -- common/autotest_common.sh@10 -- # set +x 00:06:48.080 12:50:52 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:06:48.080 12:50:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:48.080 12:50:52 -- common/autotest_common.sh@10 -- # set +x 00:06:48.080 12:50:52 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:48.646 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:48.646 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:49.581 12:50:53 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:49.581 12:50:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:49.581 12:50:53 -- common/autotest_common.sh@10 -- # set +x 00:06:49.581 12:50:53 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:49.581 12:50:53 -- common/autotest_common.sh@1565 -- # mapfile -t bdfs 00:06:49.581 12:50:53 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs_by_id 0x0a54 00:06:49.581 12:50:53 -- common/autotest_common.sh@1551 -- # bdfs=() 00:06:49.581 12:50:53 -- common/autotest_common.sh@1551 -- # local bdfs 00:06:49.581 12:50:53 -- common/autotest_common.sh@1553 -- # get_nvme_bdfs 00:06:49.581 12:50:53 -- common/autotest_common.sh@1487 -- # bdfs=() 00:06:49.581 12:50:53 -- common/autotest_common.sh@1487 -- # local bdfs 00:06:49.581 12:50:53 -- common/autotest_common.sh@1488 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:49.581 12:50:53 -- common/autotest_common.sh@1488 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:49.581 12:50:53 -- common/autotest_common.sh@1488 -- # jq -r '.config[].params.traddr' 00:06:49.581 12:50:53 -- common/autotest_common.sh@1489 -- # (( 1 == 0 )) 00:06:49.581 12:50:53 -- common/autotest_common.sh@1493 -- # printf '%s\n' 0000:00:10.0 00:06:49.581 12:50:53 -- common/autotest_common.sh@1553 -- # for bdf in $(get_nvme_bdfs) 00:06:49.581 12:50:53 -- common/autotest_common.sh@1554 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:49.581 12:50:53 -- common/autotest_common.sh@1554 -- # device=0x0010 00:06:49.581 12:50:53 -- common/autotest_common.sh@1555 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:49.581 12:50:53 -- common/autotest_common.sh@1560 -- # printf '%s\n' 00:06:49.581 12:50:53 -- common/autotest_common.sh@1566 -- # [[ -z '' ]] 00:06:49.581 12:50:53 -- common/autotest_common.sh@1567 -- # return 0 00:06:49.581 12:50:53 -- spdk/autotest.sh@150 -- # '[' 1 -eq 1 ']' 00:06:49.581 12:50:53 -- spdk/autotest.sh@151 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:06:49.581 12:50:53 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:49.581 12:50:53 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:49.581 12:50:53 -- common/autotest_common.sh@10 -- # set +x 00:06:49.841 ************************************ 00:06:49.841 START TEST unittest 00:06:49.841 ************************************ 00:06:49.841 12:50:53 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:06:49.841 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:06:49.841 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:06:49.841 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:06:49.841 +++ 
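The namespace-revert gate traced above reads the controller's OACS (Optional Admin Command Support) field out of nvme id-ctrl; bit 3 (mask 0x8) advertises Namespace Management, and 0x12a & 0x8 = 8, so the revert path proceeds. The same probe as a standalone sketch, assuming nvme-cli is installed and a controller sits at /dev/nvme0:

    # id-ctrl prints the field as e.g. "oacs : 0x12a".
    oacs=$(nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2)
    # Bash arithmetic tolerates the leading space and the 0x prefix.
    if (( oacs & 0x8 )); then
        echo "controller supports namespace management"
    fi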
dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:06:49.841 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit/../.. 00:06:49.841 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:49.841 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:06:49.841 ++ rpc_py=rpc_cmd 00:06:49.841 ++ set -e 00:06:49.841 ++ shopt -s nullglob 00:06:49.841 ++ shopt -s extglob 00:06:49.841 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:06:49.841 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:49.841 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:49.841 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:49.841 +++ CONFIG_FIO_PLUGIN=y 00:06:49.841 +++ CONFIG_NVME_CUSE=y 00:06:49.841 +++ CONFIG_RAID5F=y 00:06:49.841 +++ CONFIG_LTO=n 00:06:49.841 +++ CONFIG_SMA=n 00:06:49.841 +++ CONFIG_ISAL=y 00:06:49.841 +++ CONFIG_OPENSSL_PATH= 00:06:49.841 +++ CONFIG_IDXD_KERNEL=n 00:06:49.841 +++ CONFIG_URING_PATH= 00:06:49.841 +++ CONFIG_DAOS=n 00:06:49.841 +++ CONFIG_DPDK_LIB_DIR= 00:06:49.841 +++ CONFIG_OCF=n 00:06:49.841 +++ CONFIG_EXAMPLES=y 00:06:49.841 +++ CONFIG_RDMA_PROV=verbs 00:06:49.841 +++ CONFIG_ISCSI_INITIATOR=y 00:06:49.841 +++ CONFIG_VTUNE=n 00:06:49.841 +++ CONFIG_DPDK_INC_DIR= 00:06:49.841 +++ CONFIG_CET=n 00:06:49.841 +++ CONFIG_TESTS=y 00:06:49.841 +++ CONFIG_APPS=y 00:06:49.841 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:49.841 +++ CONFIG_DAOS_DIR= 00:06:49.841 +++ CONFIG_CRYPTO_MLX5=n 00:06:49.841 +++ CONFIG_XNVME=n 00:06:49.841 +++ CONFIG_UNIT_TESTS=y 00:06:49.841 +++ CONFIG_FUSE=n 00:06:49.841 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:49.841 +++ CONFIG_OCF_PATH= 00:06:49.841 +++ CONFIG_WPDK_DIR= 00:06:49.841 +++ CONFIG_VFIO_USER=n 00:06:49.841 +++ CONFIG_MAX_LCORES= 00:06:49.841 +++ CONFIG_ARCH=native 00:06:49.841 +++ CONFIG_TSAN=n 00:06:49.841 +++ CONFIG_VIRTIO=y 00:06:49.841 +++ CONFIG_HAVE_EVP_MAC=n 00:06:49.841 +++ CONFIG_IPSEC_MB=n 00:06:49.841 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:49.841 +++ CONFIG_ASAN=y 00:06:49.841 +++ CONFIG_SHARED=n 00:06:49.841 +++ CONFIG_VTUNE_DIR= 00:06:49.841 +++ CONFIG_RDMA_SET_TOS=y 00:06:49.841 +++ CONFIG_VBDEV_COMPRESS=n 00:06:49.841 +++ CONFIG_VFIO_USER_DIR= 00:06:49.841 +++ CONFIG_PGO_DIR= 00:06:49.841 +++ CONFIG_FUZZER_LIB= 00:06:49.841 +++ CONFIG_HAVE_EXECINFO_H=y 00:06:49.841 +++ CONFIG_USDT=n 00:06:49.841 +++ CONFIG_HAVE_KEYUTILS=y 00:06:49.841 +++ CONFIG_URING_ZNS=n 00:06:49.841 +++ CONFIG_FC_PATH= 00:06:49.841 +++ CONFIG_COVERAGE=y 00:06:49.841 +++ CONFIG_CUSTOMOCF=n 00:06:49.841 +++ CONFIG_DPDK_PKG_CONFIG=n 00:06:49.841 +++ CONFIG_WERROR=y 00:06:49.841 +++ CONFIG_DEBUG=y 00:06:49.841 +++ CONFIG_RDMA=y 00:06:49.841 +++ CONFIG_HAVE_ARC4RANDOM=n 00:06:49.841 +++ CONFIG_FUZZER=n 00:06:49.841 +++ CONFIG_FC=n 00:06:49.842 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:49.842 +++ CONFIG_HAVE_LIBARCHIVE=n 00:06:49.842 +++ CONFIG_DPDK_COMPRESSDEV=n 00:06:49.842 +++ CONFIG_CROSS_PREFIX= 00:06:49.842 +++ CONFIG_PREFIX=/usr/local 00:06:49.842 +++ CONFIG_HAVE_LIBBSD=n 00:06:49.842 +++ CONFIG_UBSAN=y 00:06:49.842 +++ CONFIG_PGO_CAPTURE=n 00:06:49.842 +++ CONFIG_UBLK=n 00:06:49.842 +++ CONFIG_ISAL_CRYPTO=y 00:06:49.842 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:49.842 +++ CONFIG_CRYPTO=n 00:06:49.842 +++ CONFIG_RBD=n 00:06:49.842 +++ CONFIG_LIBDIR= 00:06:49.842 +++ CONFIG_IPSEC_MB_DIR= 00:06:49.842 +++ CONFIG_PGO_USE=n 00:06:49.842 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:49.842 +++ CONFIG_GOLANG=n 00:06:49.842 +++ CONFIG_VHOST=y 00:06:49.842 +++ 
CONFIG_IDXD=y 00:06:49.842 +++ CONFIG_AVAHI=n 00:06:49.842 +++ CONFIG_URING=n 00:06:49.842 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:06:49.842 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:06:49.842 ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:06:49.842 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:06:49.842 +++ _root=/home/vagrant/spdk_repo/spdk 00:06:49.842 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:06:49.842 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:06:49.842 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:06:49.842 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:49.842 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:49.842 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:49.842 +++ VHOST_APP=("$_app_dir/vhost") 00:06:49.842 +++ DD_APP=("$_app_dir/spdk_dd") 00:06:49.842 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:06:49.842 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:06:49.842 +++ [[ #ifndef SPDK_CONFIG_H 00:06:49.842 #define SPDK_CONFIG_H 00:06:49.842 #define SPDK_CONFIG_APPS 1 00:06:49.842 #define SPDK_CONFIG_ARCH native 00:06:49.842 #define SPDK_CONFIG_ASAN 1 00:06:49.842 #undef SPDK_CONFIG_AVAHI 00:06:49.842 #undef SPDK_CONFIG_CET 00:06:49.842 #define SPDK_CONFIG_COVERAGE 1 00:06:49.842 #define SPDK_CONFIG_CROSS_PREFIX 00:06:49.842 #undef SPDK_CONFIG_CRYPTO 00:06:49.842 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:49.842 #undef SPDK_CONFIG_CUSTOMOCF 00:06:49.842 #undef SPDK_CONFIG_DAOS 00:06:49.842 #define SPDK_CONFIG_DAOS_DIR 00:06:49.842 #define SPDK_CONFIG_DEBUG 1 00:06:49.842 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:49.842 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:06:49.842 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:49.842 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:49.842 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:49.842 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:49.842 #define SPDK_CONFIG_EXAMPLES 1 00:06:49.842 #undef SPDK_CONFIG_FC 00:06:49.842 #define SPDK_CONFIG_FC_PATH 00:06:49.842 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:49.842 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:49.842 #undef SPDK_CONFIG_FUSE 00:06:49.842 #undef SPDK_CONFIG_FUZZER 00:06:49.842 #define SPDK_CONFIG_FUZZER_LIB 00:06:49.842 #undef SPDK_CONFIG_GOLANG 00:06:49.842 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:06:49.842 #undef SPDK_CONFIG_HAVE_EVP_MAC 00:06:49.842 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:49.842 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:49.842 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:49.842 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:49.842 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:49.842 #define SPDK_CONFIG_IDXD 1 00:06:49.842 #undef SPDK_CONFIG_IDXD_KERNEL 00:06:49.842 #undef SPDK_CONFIG_IPSEC_MB 00:06:49.842 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:49.842 #define SPDK_CONFIG_ISAL 1 00:06:49.842 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:49.842 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:49.842 #define SPDK_CONFIG_LIBDIR 00:06:49.842 #undef SPDK_CONFIG_LTO 00:06:49.842 #define SPDK_CONFIG_MAX_LCORES 00:06:49.842 #define SPDK_CONFIG_NVME_CUSE 1 00:06:49.842 #undef SPDK_CONFIG_OCF 00:06:49.842 #define SPDK_CONFIG_OCF_PATH 00:06:49.842 #define SPDK_CONFIG_OPENSSL_PATH 00:06:49.842 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:49.842 #define SPDK_CONFIG_PGO_DIR 00:06:49.842 #undef SPDK_CONFIG_PGO_USE 00:06:49.842 #define SPDK_CONFIG_PREFIX /usr/local 00:06:49.842 #define SPDK_CONFIG_RAID5F 1 00:06:49.842 
#undef SPDK_CONFIG_RBD 00:06:49.842 #define SPDK_CONFIG_RDMA 1 00:06:49.842 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:49.842 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:49.842 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:49.842 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:49.842 #undef SPDK_CONFIG_SHARED 00:06:49.842 #undef SPDK_CONFIG_SMA 00:06:49.842 #define SPDK_CONFIG_TESTS 1 00:06:49.842 #undef SPDK_CONFIG_TSAN 00:06:49.842 #undef SPDK_CONFIG_UBLK 00:06:49.842 #define SPDK_CONFIG_UBSAN 1 00:06:49.842 #define SPDK_CONFIG_UNIT_TESTS 1 00:06:49.842 #undef SPDK_CONFIG_URING 00:06:49.842 #define SPDK_CONFIG_URING_PATH 00:06:49.842 #undef SPDK_CONFIG_URING_ZNS 00:06:49.842 #undef SPDK_CONFIG_USDT 00:06:49.842 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:49.842 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:49.842 #undef SPDK_CONFIG_VFIO_USER 00:06:49.842 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:49.842 #define SPDK_CONFIG_VHOST 1 00:06:49.842 #define SPDK_CONFIG_VIRTIO 1 00:06:49.842 #undef SPDK_CONFIG_VTUNE 00:06:49.842 #define SPDK_CONFIG_VTUNE_DIR 00:06:49.842 #define SPDK_CONFIG_WERROR 1 00:06:49.842 #define SPDK_CONFIG_WPDK_DIR 00:06:49.842 #undef SPDK_CONFIG_XNVME 00:06:49.842 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:49.842 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:49.842 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:49.842 +++ [[ -e /bin/wpdk_common.sh ]] 00:06:49.842 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:49.842 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:49.842 ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:06:49.842 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:06:49.842 ++++ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:06:49.842 ++++ export PATH 00:06:49.842 ++++ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:06:49.842 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:06:49.842 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:06:49.842 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:06:49.842 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:06:49.842 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:06:49.842 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:06:49.842 +++ TEST_TAG=N/A 00:06:49.842 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:06:49.842 +++ 
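The heavily escaped comparison that closes above is just xtrace's rendering of a plain glob test: applications.sh slurps include/spdk/config.h and requires the substring "#define SPDK_CONFIG_DEBUG" before enabling the debug-app knobs. Written out without the escaping (path as in this workspace):

    config=/home/vagrant/spdk_repo/spdk/include/spdk/config.h
    # $(< file) reads the whole file; the [[ == *pat* ]] glob needs no regex.
    if [[ $(< "$config") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        echo "debug build detected"
    fi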
PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:06:49.842 ++++ uname -s 00:06:49.842 +++ PM_OS=Linux 00:06:49.842 +++ MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:49.842 +++ [[ Linux == FreeBSD ]] 00:06:49.842 +++ [[ Linux == Linux ]] 00:06:49.842 +++ [[ QEMU != QEMU ]] 00:06:49.842 +++ MONITOR_RESOURCES_PIDS=() 00:06:49.842 +++ declare -A MONITOR_RESOURCES_PIDS 00:06:49.842 +++ mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:06:49.842 ++ : 0 00:06:49.842 ++ export RUN_NIGHTLY 00:06:49.842 ++ : 0 00:06:49.842 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:06:49.842 ++ : 0 00:06:49.842 ++ export SPDK_RUN_VALGRIND 00:06:49.842 ++ : 1 00:06:49.842 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:06:49.842 ++ : 1 00:06:49.842 ++ export SPDK_TEST_UNITTEST 00:06:49.842 ++ : 00:06:49.842 ++ export SPDK_TEST_AUTOBUILD 00:06:49.842 ++ : 0 00:06:49.842 ++ export SPDK_TEST_RELEASE_BUILD 00:06:49.842 ++ : 0 00:06:49.842 ++ export SPDK_TEST_ISAL 00:06:49.842 ++ : 0 00:06:49.842 ++ export SPDK_TEST_ISCSI 00:06:49.842 ++ : 0 00:06:49.842 ++ export SPDK_TEST_ISCSI_INITIATOR 00:06:49.842 ++ : 1 00:06:49.842 ++ export SPDK_TEST_NVME 00:06:49.842 ++ : 0 00:06:49.842 ++ export SPDK_TEST_NVME_PMR 00:06:49.842 ++ : 0 00:06:49.842 ++ export SPDK_TEST_NVME_BP 00:06:49.842 ++ : 0 00:06:49.842 ++ export SPDK_TEST_NVME_CLI 00:06:49.842 ++ : 0 00:06:49.842 ++ export SPDK_TEST_NVME_CUSE 00:06:49.842 ++ : 0 00:06:49.842 ++ export SPDK_TEST_NVME_FDP 00:06:49.842 ++ : 0 00:06:49.842 ++ export SPDK_TEST_NVMF 00:06:49.842 ++ : 0 00:06:49.842 ++ export SPDK_TEST_VFIOUSER 00:06:49.842 ++ : 0 00:06:49.842 ++ export SPDK_TEST_VFIOUSER_QEMU 00:06:49.842 ++ : 0 00:06:49.842 ++ export SPDK_TEST_FUZZER 00:06:49.842 ++ : 0 00:06:49.842 ++ export SPDK_TEST_FUZZER_SHORT 00:06:49.842 ++ : rdma 00:06:49.842 ++ export SPDK_TEST_NVMF_TRANSPORT 00:06:49.842 ++ : 0 00:06:49.842 ++ export SPDK_TEST_RBD 00:06:49.842 ++ : 0 00:06:49.842 ++ export SPDK_TEST_VHOST 00:06:49.842 ++ : 1 00:06:49.842 ++ export SPDK_TEST_BLOCKDEV 00:06:49.842 ++ : 0 00:06:49.842 ++ export SPDK_TEST_IOAT 00:06:49.842 ++ : 0 00:06:49.842 ++ export SPDK_TEST_BLOBFS 00:06:49.842 ++ : 0 00:06:49.842 ++ export SPDK_TEST_VHOST_INIT 00:06:49.842 ++ : 0 00:06:49.842 ++ export SPDK_TEST_LVOL 00:06:49.842 ++ : 0 00:06:49.842 ++ export SPDK_TEST_VBDEV_COMPRESS 00:06:49.842 ++ : 1 00:06:49.843 ++ export SPDK_RUN_ASAN 00:06:49.843 ++ : 1 00:06:49.843 ++ export SPDK_RUN_UBSAN 00:06:49.843 ++ : 00:06:49.843 ++ export SPDK_RUN_EXTERNAL_DPDK 00:06:49.843 ++ : 0 00:06:49.843 ++ export SPDK_RUN_NON_ROOT 00:06:49.843 ++ : 0 00:06:49.843 ++ export SPDK_TEST_CRYPTO 00:06:49.843 ++ : 0 00:06:49.843 ++ export SPDK_TEST_FTL 00:06:49.843 ++ : 0 00:06:49.843 ++ export SPDK_TEST_OCF 00:06:49.843 ++ : 0 00:06:49.843 ++ export SPDK_TEST_VMD 00:06:49.843 ++ : 0 00:06:49.843 ++ export SPDK_TEST_OPAL 00:06:49.843 ++ : 00:06:49.843 ++ export SPDK_TEST_NATIVE_DPDK 00:06:49.843 ++ : true 00:06:49.843 ++ export SPDK_AUTOTEST_X 00:06:49.843 ++ : 1 00:06:49.843 ++ export SPDK_TEST_RAID5 00:06:49.843 ++ : 0 00:06:49.843 ++ export SPDK_TEST_URING 00:06:49.843 ++ : 0 00:06:49.843 ++ export SPDK_TEST_USDT 00:06:49.843 ++ : 0 00:06:49.843 ++ export SPDK_TEST_USE_IGB_UIO 00:06:49.843 ++ : 0 00:06:49.843 ++ export SPDK_TEST_SCHEDULER 00:06:49.843 ++ : 0 00:06:49.843 ++ export SPDK_TEST_SCANBUILD 00:06:49.843 ++ : 00:06:49.843 ++ export SPDK_TEST_NVMF_NICS 00:06:49.843 ++ : 0 00:06:49.843 ++ export SPDK_TEST_SMA 00:06:49.843 ++ : 0 00:06:49.843 ++ export SPDK_TEST_DAOS 00:06:49.843 ++ : 0 
00:06:49.843 ++ export SPDK_TEST_XNVME 00:06:49.843 ++ : 0 00:06:49.843 ++ export SPDK_TEST_ACCEL_DSA 00:06:49.843 ++ : 0 00:06:49.843 ++ export SPDK_TEST_ACCEL_IAA 00:06:49.843 ++ : 00:06:49.843 ++ export SPDK_TEST_FUZZER_TARGET 00:06:49.843 ++ : 0 00:06:49.843 ++ export SPDK_TEST_NVMF_MDNS 00:06:49.843 ++ : 0 00:06:49.843 ++ export SPDK_JSONRPC_GO_CLIENT 00:06:49.843 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:06:49.843 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:06:49.843 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:06:49.843 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:06:49.843 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:49.843 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:49.843 ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:49.843 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:49.843 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:49.843 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:06:49.843 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:49.843 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:49.843 ++ export PYTHONDONTWRITEBYTECODE=1 00:06:49.843 ++ PYTHONDONTWRITEBYTECODE=1 00:06:49.843 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:49.843 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:49.843 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:49.843 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:49.843 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:06:49.843 ++ rm -rf /var/tmp/asan_suppression_file 00:06:49.843 ++ cat 00:06:49.843 ++ echo leak:libfuse3.so 00:06:49.843 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:49.843 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:49.843 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:49.843 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:49.843 ++ '[' -z /var/spdk/dependencies ']' 00:06:49.843 ++ export DEPENDENCY_DIR 00:06:49.843 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:06:49.843 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:06:49.843 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:06:49.843 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:06:49.843 ++ export QEMU_BIN= 00:06:49.843 ++ QEMU_BIN= 00:06:49.843 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:06:49.843 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:06:49.843 ++ export 
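The suppression file assembled above keeps LeakSanitizer from failing the ASan-enabled run on allocations that leak inside libfuse3 and are outside SPDK's control. The mechanism in isolation, as a minimal sketch with a hypothetical instrumented binary:

    # One "leak:<pattern>" per line; the pattern matches a function, source
    # file, or shared-object name appearing in the leak's stack trace.
    echo 'leak:libfuse3.so' > /var/tmp/asan_suppression_file
    export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
    ./some_instrumented_test   # hypothetical target; matching leaks are now ignored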
AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:06:49.843 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:06:49.843 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:49.843 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:49.843 ++ '[' 0 -eq 0 ']' 00:06:49.843 ++ export valgrind= 00:06:49.843 ++ valgrind= 00:06:49.843 +++ uname -s 00:06:49.843 ++ '[' Linux = Linux ']' 00:06:49.843 ++ HUGEMEM=4096 00:06:49.843 ++ export CLEAR_HUGE=yes 00:06:49.843 ++ CLEAR_HUGE=yes 00:06:49.843 ++ [[ 0 -eq 1 ]] 00:06:49.843 ++ [[ 0 -eq 1 ]] 00:06:49.843 ++ MAKE=make 00:06:49.843 +++ nproc 00:06:49.843 ++ MAKEFLAGS=-j10 00:06:49.843 ++ export HUGEMEM=4096 00:06:49.843 ++ HUGEMEM=4096 00:06:49.843 ++ NO_HUGE=() 00:06:49.843 ++ TEST_MODE= 00:06:49.843 ++ [[ -z '' ]] 00:06:49.843 ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:06:49.843 ++ exec 00:06:49.843 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:06:49.843 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:06:49.843 ++ set_test_storage 2147483648 00:06:49.843 ++ [[ -v testdir ]] 00:06:49.843 ++ local requested_size=2147483648 00:06:49.843 ++ local mount target_dir 00:06:49.843 ++ local -A mounts fss sizes avails uses 00:06:49.843 ++ local source fs size avail mount use 00:06:49.843 ++ local storage_fallback storage_candidates 00:06:49.843 +++ mktemp -udt spdk.XXXXXX 00:06:49.843 ++ storage_fallback=/tmp/spdk.hOnOTr 00:06:49.843 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:49.843 ++ [[ -n '' ]] 00:06:49.843 ++ [[ -n '' ]] 00:06:49.843 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.hOnOTr/tests/unit /tmp/spdk.hOnOTr 00:06:49.843 ++ requested_size=2214592512 00:06:49.843 ++ read -r source fs size use avail _ mount 00:06:49.843 +++ df -T 00:06:49.843 +++ grep -v Filesystem 00:06:49.843 ++ mounts["$mount"]=udev 00:06:49.843 ++ fss["$mount"]=devtmpfs 00:06:49.843 ++ avails["$mount"]=6224465920 00:06:49.843 ++ sizes["$mount"]=6224465920 00:06:49.843 ++ uses["$mount"]=0 00:06:49.843 ++ read -r source fs size use avail _ mount 00:06:49.843 ++ mounts["$mount"]=tmpfs 00:06:49.843 ++ fss["$mount"]=tmpfs 00:06:49.843 ++ avails["$mount"]=1253408768 00:06:49.843 ++ sizes["$mount"]=1254514688 00:06:49.843 ++ uses["$mount"]=1105920 00:06:49.843 ++ read -r source fs size use avail _ mount 00:06:49.843 ++ mounts["$mount"]=/dev/vda1 00:06:49.843 ++ fss["$mount"]=ext4 00:06:49.843 ++ avails["$mount"]=10679042048 00:06:49.843 ++ sizes["$mount"]=20616794112 00:06:49.843 ++ uses["$mount"]=9920974848 00:06:49.843 ++ read -r source fs size use avail _ mount 00:06:49.843 ++ mounts["$mount"]=tmpfs 00:06:49.843 ++ fss["$mount"]=tmpfs 00:06:49.843 ++ avails["$mount"]=6272565248 00:06:49.843 ++ sizes["$mount"]=6272565248 00:06:49.843 ++ uses["$mount"]=0 00:06:49.843 ++ read -r source fs size use avail _ mount 00:06:49.843 ++ mounts["$mount"]=tmpfs 00:06:49.843 ++ fss["$mount"]=tmpfs 00:06:49.843 ++ avails["$mount"]=5242880 00:06:49.843 ++ sizes["$mount"]=5242880 00:06:49.843 ++ uses["$mount"]=0 00:06:49.843 ++ read -r source fs size use avail _ mount 00:06:49.843 ++ mounts["$mount"]=tmpfs 00:06:49.843 ++ fss["$mount"]=tmpfs 00:06:49.843 ++ avails["$mount"]=6272565248 00:06:49.843 ++ sizes["$mount"]=6272565248 00:06:49.843 ++ uses["$mount"]=0 00:06:49.843 ++ read -r source fs size use avail _ mount 00:06:49.843 ++ 
mounts["$mount"]=/dev/vda15 00:06:49.843 ++ fss["$mount"]=vfat 00:06:49.843 ++ avails["$mount"]=103089152 00:06:49.843 ++ sizes["$mount"]=109422592 00:06:49.843 ++ uses["$mount"]=6334464 00:06:49.843 ++ read -r source fs size use avail _ mount 00:06:49.843 ++ mounts["$mount"]=/dev/loop2 00:06:49.843 ++ fss["$mount"]=squashfs 00:06:49.843 ++ avails["$mount"]=0 00:06:49.843 ++ sizes["$mount"]=41025536 00:06:49.843 ++ uses["$mount"]=41025536 00:06:49.843 ++ read -r source fs size use avail _ mount 00:06:49.843 ++ mounts["$mount"]=/dev/loop1 00:06:49.843 ++ fss["$mount"]=squashfs 00:06:49.843 ++ avails["$mount"]=0 00:06:49.843 ++ sizes["$mount"]=67108864 00:06:49.843 ++ uses["$mount"]=67108864 00:06:49.843 ++ read -r source fs size use avail _ mount 00:06:49.843 ++ mounts["$mount"]=/dev/loop0 00:06:49.843 ++ fss["$mount"]=squashfs 00:06:49.843 ++ avails["$mount"]=0 00:06:49.843 ++ sizes["$mount"]=96337920 00:06:49.843 ++ uses["$mount"]=96337920 00:06:49.843 ++ read -r source fs size use avail _ mount 00:06:49.843 ++ mounts["$mount"]=tmpfs 00:06:49.843 ++ fss["$mount"]=tmpfs 00:06:49.843 ++ avails["$mount"]=1254510592 00:06:49.843 ++ sizes["$mount"]=1254510592 00:06:49.843 ++ uses["$mount"]=0 00:06:49.843 ++ read -r source fs size use avail _ mount 00:06:49.843 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest_3/ubuntu2004-libvirt/output 00:06:49.843 ++ fss["$mount"]=fuse.sshfs 00:06:49.843 ++ avails["$mount"]=93522841600 00:06:49.843 ++ sizes["$mount"]=105088212992 00:06:49.843 ++ uses["$mount"]=6179938304 00:06:49.843 ++ read -r source fs size use avail _ mount 00:06:49.843 ++ printf '* Looking for test storage...\n' 00:06:49.843 * Looking for test storage... 00:06:49.843 ++ local target_space new_size 00:06:49.843 ++ for target_dir in "${storage_candidates[@]}" 00:06:49.843 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:06:49.843 +++ awk '$1 !~ /Filesystem/{print $6}' 00:06:49.843 ++ mount=/ 00:06:49.843 ++ target_space=10679042048 00:06:49.843 ++ (( target_space == 0 || target_space < requested_size )) 00:06:49.843 ++ (( target_space >= requested_size )) 00:06:49.843 ++ [[ ext4 == tmpfs ]] 00:06:49.843 ++ [[ ext4 == ramfs ]] 00:06:49.844 ++ [[ / == / ]] 00:06:49.844 ++ new_size=12135567360 00:06:49.844 ++ (( new_size * 100 / sizes[/] > 95 )) 00:06:49.844 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:06:49.844 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:06:49.844 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:06:49.844 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:06:49.844 ++ return 0 00:06:49.844 ++ set -o errtrace 00:06:49.844 ++ shopt -s extdebug 00:06:49.844 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:06:49.844 ++ PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:49.844 12:50:53 -- common/autotest_common.sh@1661 -- # true 00:06:49.844 12:50:53 -- common/autotest_common.sh@1663 -- # xtrace_fd 00:06:49.844 12:50:53 -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:06:49.844 12:50:53 -- common/autotest_common.sh@29 -- # exec 00:06:49.844 12:50:53 -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:49.844 12:50:53 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:06:49.844 12:50:53 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:49.844 12:50:53 -- common/autotest_common.sh@18 -- # set -x 00:06:49.844 12:50:53 -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:06:49.844 12:50:53 -- unit/unittest.sh@151 -- # '[' 0 -eq 1 ']' 00:06:49.844 12:50:53 -- unit/unittest.sh@158 -- # '[' -z x ']' 00:06:49.844 12:50:53 -- unit/unittest.sh@165 -- # '[' 0 -eq 1 ']' 00:06:49.844 12:50:53 -- unit/unittest.sh@178 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:06:49.844 12:50:53 -- unit/unittest.sh@178 -- # CC_TYPE=CC_TYPE=gcc 00:06:49.844 12:50:53 -- unit/unittest.sh@179 -- # hash lcov 00:06:49.844 12:50:53 -- unit/unittest.sh@179 -- # grep -q '#define SPDK_CONFIG_COVERAGE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:49.844 12:50:53 -- unit/unittest.sh@179 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:06:49.844 12:50:53 -- unit/unittest.sh@180 -- # cov_avail=yes 00:06:49.844 12:50:53 -- unit/unittest.sh@184 -- # '[' yes = yes ']' 00:06:49.844 12:50:53 -- unit/unittest.sh@186 -- # [[ -z /home/vagrant/spdk_repo/spdk/../output ]] 00:06:49.844 12:50:53 -- unit/unittest.sh@189 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:06:49.844 12:50:53 -- unit/unittest.sh@191 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:06:49.844 12:50:53 -- unit/unittest.sh@199 -- # export 'LCOV_OPTS= 00:06:49.844 --rc lcov_branch_coverage=1 00:06:49.844 --rc lcov_function_coverage=1 00:06:49.844 --rc genhtml_branch_coverage=1 00:06:49.844 --rc genhtml_function_coverage=1 00:06:49.844 --rc genhtml_legend=1 00:06:49.844 --rc geninfo_all_blocks=1 00:06:49.844 ' 00:06:49.844 12:50:53 -- unit/unittest.sh@199 -- # LCOV_OPTS=' 00:06:49.844 --rc lcov_branch_coverage=1 00:06:49.844 --rc lcov_function_coverage=1 00:06:49.844 --rc genhtml_branch_coverage=1 00:06:49.844 --rc genhtml_function_coverage=1 00:06:49.844 --rc genhtml_legend=1 00:06:49.844 --rc geninfo_all_blocks=1 00:06:49.844 ' 00:06:49.844 12:50:53 -- unit/unittest.sh@200 -- # export 'LCOV=lcov 00:06:49.844 --rc lcov_branch_coverage=1 00:06:49.844 --rc lcov_function_coverage=1 00:06:49.844 --rc genhtml_branch_coverage=1 00:06:49.844 --rc genhtml_function_coverage=1 00:06:49.844 --rc genhtml_legend=1 00:06:49.844 --rc geninfo_all_blocks=1 00:06:49.844 --no-external' 00:06:49.844 12:50:53 -- unit/unittest.sh@200 -- # LCOV='lcov 00:06:49.844 --rc lcov_branch_coverage=1 00:06:49.844 --rc lcov_function_coverage=1 00:06:49.844 --rc genhtml_branch_coverage=1 00:06:49.844 --rc genhtml_function_coverage=1 00:06:49.844 --rc genhtml_legend=1 00:06:49.844 --rc geninfo_all_blocks=1 00:06:49.844 --no-external' 00:06:49.844 12:50:53 -- unit/unittest.sh@202 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -d . 
-t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info 00:06:51.744 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:06:51.744 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:06:51.744 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:06:51.744 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:06:51.744 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:06:51.744 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:06:51.744 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:06:51.744 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:06:51.744 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:06:51.744 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:06:51.744 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:06:51.744 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:06:51.744 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:06:51.744 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:06:51.744 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:06:51.744 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:06:51.744 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:06:51.744 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:06:51.744 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:06:51.744 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:06:51.745 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:06:51.745 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:06:51.745 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:06:51.745 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:06:51.745 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:06:51.745 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:06:51.745 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:06:51.745 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:06:51.745 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:06:51.745 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:06:51.745 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:06:51.745 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:06:51.745 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:06:51.745 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:06:51.745 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:06:51.745 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:06:51.745 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:06:51.745 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:06:51.745 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:06:51.745 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:06:51.745 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:06:51.745 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:06:51.745 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not 
produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:06:51.745 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:06:51.745 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:06:51.745 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:06:51.745 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:06:51.745 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:06:51.745 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:06:51.745 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:06:51.745 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:06:51.745 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:06:51.745 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:06:51.745 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:06:51.745 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:06:51.745 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:06:51.745 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:06:51.745 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:06:51.745 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:06:51.745 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:06:51.745 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:06:51.745 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:06:51.745 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:06:51.745 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:06:51.745 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:06:51.745 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:06:51.745 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:06:51.745 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:06:51.745 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:06:52.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:06:52.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:06:52.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:06:52.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:06:52.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:06:52.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:06:52.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:06:52.004 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:06:52.004 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:06:52.005 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:06:52.005 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:06:52.005 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:06:52.005 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:06:52.005 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:06:52.005 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:06:52.005 geninfo: WARNING: GCOV did not produce 
any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:06:52.005 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:06:52.005 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:06:52.005 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:06:52.005 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:06:52.005 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:06:52.005 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:06:52.005 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:06:52.005 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:06:52.005 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:06:52.005 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:06:52.005 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:06:52.005 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:06:52.005 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:06:52.005 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:06:52.005 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:06:52.005 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:06:52.005 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:06:52.005 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:06:52.005 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:06:52.005 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:06:52.005 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:06:52.005 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:06:52.005 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:06:52.005 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:06:52.005 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:06:52.005 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:06:52.005 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:06:52.005 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:06:52.005 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:06:52.005 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:06:52.005 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:06:52.005 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:06:52.005 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:06:52.005 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:06:52.005 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:06:52.005 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:06:52.005 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:06:52.005 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:06:52.005 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:06:52.005 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:06:52.005 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:06:52.005 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:06:52.005 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:06:52.005 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:07:38.727 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:07:38.727 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:07:38.728 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:07:38.728 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:07:38.728 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:07:38.728 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:07:44.030 12:51:47 -- unit/unittest.sh@206 -- # uname -m 00:07:44.030 12:51:47 -- unit/unittest.sh@206 -- # '[' x86_64 = aarch64 ']' 00:07:44.030 12:51:47 -- unit/unittest.sh@210 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:07:44.030 12:51:47 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:07:44.030 12:51:47 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:44.030 12:51:47 -- common/autotest_common.sh@10 -- # set +x 00:07:44.030 ************************************ 00:07:44.030 START TEST unittest_pci_event 00:07:44.030 ************************************ 00:07:44.030 12:51:47 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:07:44.030 00:07:44.030 00:07:44.030 CUnit - A unit testing framework for C - Version 2.1-3 00:07:44.030 http://cunit.sourceforge.net/ 00:07:44.030 00:07:44.030 00:07:44.030 Suite: pci_event 00:07:44.030 Test: test_pci_parse_event ...[2024-04-17 12:51:47.306075] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000 00:07:44.030 [2024-04-17 12:51:47.306619] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000 00:07:44.030 passed 00:07:44.030 00:07:44.030 Run Summary: Type Total Ran Passed Failed Inactive 00:07:44.030 suites 1 1 n/a 0 0 00:07:44.030 tests 1 1 1 0 0 00:07:44.030 asserts 15 15 15 0 n/a 00:07:44.030 00:07:44.030 Elapsed time = 
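The long run of geninfo "no functions found" warnings above is expected rather than a failure: unittest.sh first captures an initial (all-zero) coverage baseline across the whole tree, and each test/cpp_headers object compiles a single public header on its own, so many of those .gcno files genuinely contain no function records and geninfo only warns and moves on. The baseline capture that emitted them, reassembled from the wrapped command earlier in this log (UT_COVERAGE and LCOV_OPTS as exported there):

    # -c -i captures initial zero-count coverage so post-test runs can be
    # diffed against it; --no-external drops files outside the build tree.
    lcov $LCOV_OPTS --no-external -q -c -i -d . \
         -t Baseline -o "$UT_COVERAGE/ut_cov_base.info"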
0.001 seconds 00:07:44.030 00:07:44.030 real 0m0.040s 00:07:44.030 user 0m0.018s 00:07:44.030 sys 0m0.019s 00:07:44.030 12:51:47 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:07:44.030 ************************************ 00:07:44.030 END TEST unittest_pci_event 00:07:44.030 ************************************ 00:07:44.030 12:51:47 -- common/autotest_common.sh@10 -- # set +x 00:07:44.030 12:51:47 -- unit/unittest.sh@211 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:07:44.030 12:51:47 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:07:44.030 12:51:47 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:44.030 12:51:47 -- common/autotest_common.sh@10 -- # set +x 00:07:44.030 ************************************ 00:07:44.030 START TEST unittest_include 00:07:44.030 ************************************ 00:07:44.030 12:51:47 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:07:44.030 00:07:44.030 00:07:44.030 CUnit - A unit testing framework for C - Version 2.1-3 00:07:44.030 http://cunit.sourceforge.net/ 00:07:44.030 00:07:44.030 00:07:44.030 Suite: histogram 00:07:44.030 Test: histogram_test ...passed 00:07:44.030 Test: histogram_merge ...passed 00:07:44.030 00:07:44.030 Run Summary: Type Total Ran Passed Failed Inactive 00:07:44.030 suites 1 1 n/a 0 0 00:07:44.030 tests 2 2 2 0 0 00:07:44.030 asserts 50 50 50 0 n/a 00:07:44.030 00:07:44.030 Elapsed time = 0.005 seconds 00:07:44.030 00:07:44.030 real 0m0.032s 00:07:44.030 user 0m0.020s 00:07:44.030 sys 0m0.013s 00:07:44.030 12:51:47 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:07:44.030 12:51:47 -- common/autotest_common.sh@10 -- # set +x 00:07:44.030 ************************************ 00:07:44.030 END TEST unittest_include 00:07:44.030 ************************************ 00:07:44.030 12:51:47 -- unit/unittest.sh@212 -- # run_test unittest_bdev unittest_bdev 00:07:44.030 12:51:47 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:07:44.030 12:51:47 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:44.030 12:51:47 -- common/autotest_common.sh@10 -- # set +x 00:07:44.030 ************************************ 00:07:44.030 START TEST unittest_bdev 00:07:44.030 ************************************ 00:07:44.030 12:51:47 -- common/autotest_common.sh@1099 -- # unittest_bdev 00:07:44.030 12:51:47 -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:07:44.030 00:07:44.030 00:07:44.030 CUnit - A unit testing framework for C - Version 2.1-3 00:07:44.030 http://cunit.sourceforge.net/ 00:07:44.030 00:07:44.030 00:07:44.030 Suite: bdev 00:07:44.030 Test: bytes_to_blocks_test ...passed 00:07:44.030 Test: num_blocks_test ...passed 00:07:44.030 Test: io_valid_test ...passed 00:07:44.030 Test: open_write_test ...[2024-04-17 12:51:47.568896] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7988:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:07:44.030 [2024-04-17 12:51:47.569311] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7988:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:07:44.030 [2024-04-17 12:51:47.569510] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7988:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:07:44.030 passed 00:07:44.030 Test: claim_test ...passed 00:07:44.030 Test: 
alias_add_del_test ...[2024-04-17 12:51:47.646736] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4548:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:07:44.030 [2024-04-17 12:51:47.646999] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4578:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:07:44.030 [2024-04-17 12:51:47.647078] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4548:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:07:44.030 passed 00:07:44.030 Test: get_device_stat_test ...passed 00:07:44.030 Test: bdev_io_types_test ...passed 00:07:44.030 Test: bdev_io_wait_test ...passed 00:07:44.031 Test: bdev_io_spans_split_test ...passed 00:07:44.031 Test: bdev_io_boundary_split_test ...passed 00:07:44.031 Test: bdev_io_max_size_and_segment_split_test ...[2024-04-17 12:51:47.802710] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3185:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:07:44.031 passed 00:07:44.031 Test: bdev_io_mix_split_test ...passed 00:07:44.031 Test: bdev_io_split_with_io_wait ...passed 00:07:44.031 Test: bdev_io_write_unit_split_test ...[2024-04-17 12:51:47.929475] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2740:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:07:44.031 [2024-04-17 12:51:47.929817] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2740:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:07:44.031 [2024-04-17 12:51:47.929881] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2740:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:07:44.031 [2024-04-17 12:51:47.930030] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2740:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:07:44.031 passed 00:07:44.031 Test: bdev_io_alignment_with_boundary ...passed 00:07:44.031 Test: bdev_io_alignment ...passed 00:07:44.031 Test: bdev_histograms ...passed 00:07:44.031 Test: bdev_write_zeroes ...passed 00:07:44.031 Test: bdev_compare_and_write ...passed 00:07:44.289 Test: bdev_compare ...passed 00:07:44.289 Test: bdev_compare_emulated ...passed 00:07:44.289 Test: bdev_zcopy_write ...passed 00:07:44.289 Test: bdev_zcopy_read ...passed 00:07:44.289 Test: bdev_open_while_hotremove ...passed 00:07:44.289 Test: bdev_close_while_hotremove ...passed 00:07:44.289 Test: bdev_open_ext_test ...[2024-04-17 12:51:48.418793] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8094:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:07:44.289 passed 00:07:44.289 Test: bdev_open_ext_unregister ...[2024-04-17 12:51:48.419277] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8094:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:07:44.289 passed 00:07:44.548 Test: bdev_set_io_timeout ...passed 00:07:44.548 Test: bdev_set_qd_sampling ...passed 00:07:44.548 Test: lba_range_overlap ...passed 00:07:44.548 Test: lock_lba_range_check_ranges ...passed 00:07:44.548 Test: lock_lba_range_with_io_outstanding ...passed 00:07:44.548 Test: lock_lba_range_overlapped ...passed 00:07:44.548 Test: bdev_quiesce ...[2024-04-17 12:51:48.667992] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:10017:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 
00:07:44.548 passed 00:07:44.806 Test: bdev_io_abort ...passed 00:07:44.806 Test: bdev_unmap ...passed 00:07:44.806 Test: bdev_write_zeroes_split_test ...passed 00:07:44.806 Test: bdev_set_options_test ...[2024-04-17 12:51:48.809632] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 483:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:07:44.806 passed 00:07:44.806 Test: bdev_get_memory_domains ...passed 00:07:44.806 Test: bdev_io_ext ...passed 00:07:44.806 Test: bdev_io_ext_no_opts ...passed 00:07:44.806 Test: bdev_io_ext_invalid_opts ...passed 00:07:45.064 Test: bdev_io_ext_split ...passed 00:07:45.064 Test: bdev_io_ext_bounce_buffer ...passed 00:07:45.064 Test: bdev_register_uuid_alias ...[2024-04-17 12:51:49.041211] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4548:bdev_name_add: *ERROR*: Bdev name d188f3b6-d0f2-4fbf-b9a0-9cd875a81bb2 already exists 00:07:45.064 [2024-04-17 12:51:49.041485] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7651:bdev_register: *ERROR*: Unable to add uuid:d188f3b6-d0f2-4fbf-b9a0-9cd875a81bb2 alias for bdev bdev0 00:07:45.064 passed 00:07:45.064 Test: bdev_unregister_by_name ...[2024-04-17 12:51:49.065515] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7884:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:07:45.064 [2024-04-17 12:51:49.065667] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7892:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 00:07:45.064 passed 00:07:45.064 Test: for_each_bdev_test ...passed 00:07:45.064 Test: bdev_seek_test ...passed 00:07:45.064 Test: bdev_copy ...passed 00:07:45.064 Test: bdev_copy_split_test ...passed 00:07:45.064 Test: examine_locks ...passed 00:07:45.064 Test: claim_v2_rwo ...[2024-04-17 12:51:49.197787] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7988:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:07:45.064 [2024-04-17 12:51:49.197972] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8618:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:07:45.064 [2024-04-17 12:51:49.198088] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8783:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:07:45.064 [2024-04-17 12:51:49.198236] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8783:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:07:45.064 [2024-04-17 12:51:49.198294] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8455:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:07:45.064 [2024-04-17 12:51:49.198438] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8613:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:07:45.064 passed 00:07:45.064 Test: claim_v2_rom ...[2024-04-17 12:51:49.198846] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7988:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:07:45.064 [2024-04-17 12:51:49.199009] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8783:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:07:45.064 [2024-04-17 12:51:49.199148] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8783:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module 
bdev_ut 00:07:45.064 [2024-04-17 12:51:49.199225] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8455:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:07:45.064 [2024-04-17 12:51:49.199362] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8656:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:07:45.064 [2024-04-17 12:51:49.199496] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8651:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:07:45.064 passed 00:07:45.064 Test: claim_v2_rwm ...[2024-04-17 12:51:49.199889] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8686:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:07:45.064 [2024-04-17 12:51:49.200049] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7988:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:07:45.064 [2024-04-17 12:51:49.200116] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8783:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:07:45.064 [2024-04-17 12:51:49.200322] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8783:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:07:45.064 [2024-04-17 12:51:49.200460] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8455:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:07:45.065 [2024-04-17 12:51:49.200521] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8706:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:07:45.065 [2024-04-17 12:51:49.200659] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8686:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:07:45.065 passed 00:07:45.065 Test: claim_v2_existing_writer ...[2024-04-17 12:51:49.201071] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8651:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:07:45.065 [2024-04-17 12:51:49.201205] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8651:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:07:45.065 passed 00:07:45.065 Test: claim_v2_existing_v1 ...[2024-04-17 12:51:49.201460] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8783:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:07:45.065 [2024-04-17 12:51:49.201566] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8783:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:07:45.065 [2024-04-17 12:51:49.201648] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8783:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:07:45.065 passed 00:07:45.065 Test: claim_v1_existing_v2 ...[2024-04-17 12:51:49.202044] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8455:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:07:45.065 [2024-04-17 12:51:49.202195] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8455:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:07:45.065 [2024-04-17 
12:51:49.202320] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8455:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:07:45.065 passed 00:07:45.065 Test: examine_claimed ...[2024-04-17 12:51:49.202723] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8783:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:07:45.065 passed 00:07:45.065 00:07:45.065 Run Summary: Type Total Ran Passed Failed Inactive 00:07:45.065 suites 1 1 n/a 0 0 00:07:45.065 tests 59 59 59 0 0 00:07:45.065 asserts 4599 4599 4599 0 n/a 00:07:45.065 00:07:45.065 Elapsed time = 1.678 seconds 00:07:45.378 12:51:49 -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:07:45.378 00:07:45.378 00:07:45.378 CUnit - A unit testing framework for C - Version 2.1-3 00:07:45.378 http://cunit.sourceforge.net/ 00:07:45.378 00:07:45.378 00:07:45.378 Suite: nvme 00:07:45.378 Test: test_create_ctrlr ...passed 00:07:45.378 Test: test_reset_ctrlr ...[2024-04-17 12:51:49.255126] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:45.378 passed 00:07:45.378 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:07:45.378 Test: test_failover_ctrlr ...passed 00:07:45.378 Test: test_race_between_failover_and_add_secondary_trid ...[2024-04-17 12:51:49.258721] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:45.378 [2024-04-17 12:51:49.259036] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:45.378 [2024-04-17 12:51:49.259327] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:45.378 passed 00:07:45.378 Test: test_pending_reset ...[2024-04-17 12:51:49.261531] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:45.378 [2024-04-17 12:51:49.261916] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:45.378 passed 00:07:45.378 Test: test_attach_ctrlr ...[2024-04-17 12:51:49.263403] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4264:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:07:45.378 passed 00:07:45.378 Test: test_aer_cb ...passed 00:07:45.378 Test: test_submit_nvme_cmd ...passed 00:07:45.378 Test: test_add_remove_trid ...passed 00:07:45.378 Test: test_abort ...[2024-04-17 12:51:49.268193] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7367:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:07:45.378 passed 00:07:45.378 Test: test_get_io_qpair ...passed 00:07:45.378 Test: test_bdev_unregister ...passed 00:07:45.378 Test: test_compare_ns ...passed 00:07:45.378 Test: test_init_ana_log_page ...passed 00:07:45.378 Test: test_get_memory_domains ...passed 00:07:45.378 Test: test_reconnect_qpair ...[2024-04-17 12:51:49.271905] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
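Before the nvme suite output continues: the claim_v2_* tests that closed out the bdev suite above exercise spdk_bdev_module_claim_bdev_desc() with the v2 claim types seen in the errors (exclusive_write, read_many_write_one, read_many_write_none, read_many_write_many). A sketch of a module taking a read-many-write-one claim; the module handle is hypothetical, and the spdk_bdev_claim_opts_init() call assumes the size-tagged options convention used elsewhere in SPDK:

#include "spdk/bdev_module.h"

/* Hypothetical module handle; real modules are declared via
 * SPDK_BDEV_MODULE_REGISTER(). */
extern struct spdk_bdev_module example_if;

static int
example_claim_rwo(struct spdk_bdev_desc *desc)
{
        struct spdk_bdev_claim_opts opts;

        spdk_bdev_claim_opts_init(&opts, sizeof(opts));
        /* Per claim_verify_rwo above, a key is rejected for read-write-once
         * claims; claim_verify_rwm conversely requires shared_claim_key. */
        return spdk_bdev_module_claim_bdev_desc(desc,
                                                SPDK_BDEV_CLAIM_READ_MANY_WRITE_ONE,
                                                &opts, &example_if);
}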
00:07:45.378 passed 00:07:45.378 Test: test_create_bdev_ctrlr ...[2024-04-17 12:51:49.272686] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5315:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:07:45.378 passed 00:07:45.378 Test: test_add_multi_ns_to_bdev ...[2024-04-17 12:51:49.274473] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4520:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:07:45.378 passed 00:07:45.378 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:07:45.378 Test: test_admin_path ...passed 00:07:45.378 Test: test_reset_bdev_ctrlr ...passed 00:07:45.378 Test: test_find_io_path ...passed 00:07:45.379 Test: test_retry_io_if_ana_state_is_updating ...passed 00:07:45.379 Test: test_retry_io_for_io_path_error ...passed 00:07:45.379 Test: test_retry_io_count ...passed 00:07:45.379 Test: test_concurrent_read_ana_log_page ...passed 00:07:45.379 Test: test_retry_io_for_ana_error ...passed 00:07:45.379 Test: test_check_io_error_resiliency_params ...[2024-04-17 12:51:49.284767] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5997:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 00:07:45.379 [2024-04-17 12:51:49.284886] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6001:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:07:45.379 [2024-04-17 12:51:49.284943] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6010:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:07:45.379 [2024-04-17 12:51:49.284990] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6013:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:07:45.379 [2024-04-17 12:51:49.285028] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6025:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:07:45.379 [2024-04-17 12:51:49.285077] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6025:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:07:45.379 [2024-04-17 12:51:49.285113] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6005:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:07:45.379 [2024-04-17 12:51:49.285179] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6020:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:07:45.379 [2024-04-17 12:51:49.285225] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6017:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:07:45.379 passed 00:07:45.379 Test: test_retry_io_if_ctrlr_is_resetting ...passed 00:07:45.379 Test: test_reconnect_ctrlr ...[2024-04-17 12:51:49.286432] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:45.379 [2024-04-17 12:51:49.286677] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
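test_check_io_error_resiliency_params above enumerates every rejection rule for the reconnect/failover timeouts. Restated as a standalone checker — a paraphrase of the logged error messages, not SPDK's internal bdev_nvme_check_io_error_resiliency_params(); the exact guard ordering is an assumption:

#include <stdbool.h>
#include <stdint.h>

/* ctrlr_loss_timeout_sec uses -1 to mean "retry forever". */
static bool
io_error_resiliency_params_valid(int32_t ctrlr_loss_timeout_sec,
                                 uint32_t reconnect_delay_sec,
                                 uint32_t fast_io_fail_timeout_sec)
{
        if (ctrlr_loss_timeout_sec < -1) {
                return false; /* "can't be less than -1" */
        }
        if (ctrlr_loss_timeout_sec == 0) {
                /* "Both reconnect_delay_sec and fast_io_fail_timeout_sec must
                 * be 0 if ctrlr_loss_timeout_sec is 0." */
                return reconnect_delay_sec == 0 && fast_io_fail_timeout_sec == 0;
        }
        if (reconnect_delay_sec == 0) {
                return false; /* "can't be 0 if ctrlr_loss_timeout_sec is not 0" */
        }
        if (ctrlr_loss_timeout_sec > 0 &&
            reconnect_delay_sec > (uint32_t)ctrlr_loss_timeout_sec) {
                return false; /* reconnect delay exceeds ctrlr loss timeout */
        }
        if (fast_io_fail_timeout_sec != 0) {
                if (ctrlr_loss_timeout_sec > 0 &&
                    fast_io_fail_timeout_sec > (uint32_t)ctrlr_loss_timeout_sec) {
                        return false; /* fast-fail exceeds ctrlr loss timeout */
                }
                if (reconnect_delay_sec > fast_io_fail_timeout_sec) {
                        return false; /* reconnect delay exceeds fast-fail */
                }
        }
        return true;
}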
00:07:45.379 [2024-04-17 12:51:49.287098] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:45.379 [2024-04-17 12:51:49.287292] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:45.379 [2024-04-17 12:51:49.287458] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:45.379 passed 00:07:45.379 Test: test_retry_failover_ctrlr ...[2024-04-17 12:51:49.288034] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:45.379 passed 00:07:45.379 Test: test_fail_path ...[2024-04-17 12:51:49.288844] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:45.379 [2024-04-17 12:51:49.289089] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:45.379 [2024-04-17 12:51:49.289269] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:45.379 [2024-04-17 12:51:49.289450] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:45.379 [2024-04-17 12:51:49.289681] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:45.379 passed 00:07:45.379 Test: test_nvme_ns_cmp ...passed 00:07:45.379 Test: test_ana_transition ...passed 00:07:45.379 Test: test_set_preferred_path ...passed 00:07:45.379 Test: test_find_next_io_path ...passed 00:07:45.379 Test: test_find_io_path_min_qd ...passed 00:07:45.379 Test: test_disable_auto_failback ...[2024-04-17 12:51:49.292197] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:45.379 passed 00:07:45.379 Test: test_set_multipath_policy ...passed 00:07:45.379 Test: test_uuid_generation ...passed 00:07:45.379 Test: test_retry_io_to_same_path ...passed 00:07:45.379 Test: test_race_between_reset_and_disconnected ...passed 00:07:45.379 Test: test_ctrlr_op_rpc ...passed 00:07:45.379 Test: test_bdev_ctrlr_op_rpc ...passed 00:07:45.379 Test: test_disable_enable_ctrlr ...[2024-04-17 12:51:49.297601] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:45.379 [2024-04-17 12:51:49.297842] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2048:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:07:45.379 passed 00:07:45.379 Test: test_delete_ctrlr_done ...passed 00:07:45.379 Test: test_ns_remove_during_reset ...passed 00:07:45.379 00:07:45.379 Run Summary: Type Total Ran Passed Failed Inactive 00:07:45.379 suites 1 1 n/a 0 0 00:07:45.379 tests 48 48 48 0 0 00:07:45.379 asserts 3558 3558 3558 0 n/a 00:07:45.379 00:07:45.379 Elapsed time = 0.046 seconds 00:07:45.379 12:51:49 -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:07:45.379 00:07:45.379 00:07:45.379 CUnit - A unit testing framework for C - Version 2.1-3 00:07:45.379 http://cunit.sourceforge.net/ 00:07:45.379 00:07:45.379 Test Options 00:07:45.379 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2, g_enable_dif = 0 00:07:45.379 00:07:45.379 Suite: raid 00:07:45.379 Test: test_create_raid ...passed 00:07:45.379 Test: test_create_raid_superblock ...passed 00:07:45.379 Test: test_delete_raid ...passed 00:07:45.379 Test: test_create_raid_invalid_args ...[2024-04-17 12:51:49.346126] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1487:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:07:45.379 [2024-04-17 12:51:49.346825] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1481:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:07:45.379 [2024-04-17 12:51:49.347474] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1471:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:07:45.379 [2024-04-17 12:51:49.347938] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3082:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:07:45.379 [2024-04-17 12:51:49.348953] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3082:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:07:45.379 passed 00:07:45.379 Test: test_delete_raid_invalid_args ...passed 00:07:45.379 Test: test_io_channel ...passed 00:07:45.379 Test: test_reset_io ...passed 00:07:45.379 Test: test_write_io ...passed 00:07:45.379 Test: test_read_io ...passed 00:07:46.313 Test: test_unmap_io ...passed 00:07:46.313 Test: test_io_failure ...[2024-04-17 12:51:50.421432] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 962:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:07:46.313 passed 00:07:46.313 Test: test_multi_raid_no_io ...passed 00:07:46.313 Test: test_multi_raid_with_io ...passed 00:07:46.313 Test: test_io_type_supported ...passed 00:07:46.313 Test: test_raid_json_dump_info ...passed 00:07:46.313 Test: test_context_size ...passed 00:07:46.313 Test: test_raid_level_conversions ...passed 00:07:46.313 Test: test_raid_io_split ...passedTest Options 00:07:46.313 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2, g_enable_dif = 1 00:07:46.313 00:07:46.313 Suite: raid_dif 00:07:46.313 Test: test_create_raid ...passed 00:07:46.313 Test: test_create_raid_superblock ...passed 00:07:46.313 Test: test_delete_raid ...passed 00:07:46.313 Test: test_create_raid_invalid_args ...[2024-04-17 12:51:50.428817] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1487:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:07:46.313 [2024-04-17 12:51:50.428933] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1481:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:07:46.313 [2024-04-17 12:51:50.429157] 
/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1471:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:07:46.313 [2024-04-17 12:51:50.429236] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3082:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:07:46.313 [2024-04-17 12:51:50.429820] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3082:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:07:46.313 passed 00:07:46.313 Test: test_delete_raid_invalid_args ...passed 00:07:46.313 Test: test_io_channel ...passed 00:07:46.313 Test: test_reset_io ...passed 00:07:46.313 Test: test_write_io ...passed 00:07:46.313 Test: test_read_io ...passed 00:07:47.249 Test: test_unmap_io ...passed 00:07:47.249 Test: test_io_failure ...[2024-04-17 12:51:51.362104] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 962:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:07:47.249 passed 00:07:47.249 Test: test_multi_raid_no_io ...passed 00:07:47.249 Test: test_multi_raid_with_io ...passed 00:07:47.249 Test: test_io_type_supported ...passed 00:07:47.249 Test: test_raid_json_dump_info ...passed 00:07:47.249 Test: test_context_size ...passed 00:07:47.249 Test: test_raid_level_conversions ...passed 00:07:47.249 Test: test_raid_io_split ...passedTest Options 00:07:47.249 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2, g_enable_dif = 0 00:07:47.249 00:07:47.249 Suite: raid_single_run 00:07:47.249 Test: test_raid_process ...passed 00:07:47.249 00:07:47.249 Run Summary: Type Total Ran Passed Failed Inactive 00:07:47.249 suites 3 3 n/a 0 0 00:07:47.249 tests 37 37 37 0 0 00:07:47.249 asserts 355354 355354 355354 0 n/a 00:07:47.249 00:07:47.249 Elapsed time = 2.013 seconds 00:07:47.508 12:51:51 -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:07:47.509 00:07:47.509 00:07:47.509 CUnit - A unit testing framework for C - Version 2.1-3 00:07:47.509 http://cunit.sourceforge.net/ 00:07:47.509 00:07:47.509 00:07:47.509 Suite: raid_sb 00:07:47.509 Test: test_raid_bdev_write_superblock ...passed 00:07:47.509 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:07:47.509 Test: test_raid_bdev_parse_superblock ...[2024-04-17 12:51:51.416514] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 141:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:07:47.509 passed 00:07:47.509 00:07:47.509 Run Summary: Type Total Ran Passed Failed Inactive 00:07:47.509 suites 1 1 n/a 0 0 00:07:47.509 tests 3 3 3 0 0 00:07:47.509 asserts 32 32 32 0 n/a 00:07:47.509 00:07:47.509 Elapsed time = 0.001 seconds 00:07:47.509 12:51:51 -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:07:47.509 00:07:47.509 00:07:47.509 CUnit - A unit testing framework for C - Version 2.1-3 00:07:47.509 http://cunit.sourceforge.net/ 00:07:47.509 00:07:47.509 00:07:47.509 Suite: concat 00:07:47.509 Test: test_concat_start ...passed 00:07:47.509 Test: test_concat_rw ...passed 00:07:47.509 Test: test_concat_null_payload ...passed 00:07:47.509 00:07:47.509 Run Summary: Type Total Ran Passed Failed Inactive 00:07:47.509 suites 1 1 n/a 0 0 00:07:47.509 tests 3 3 3 0 0 00:07:47.509 asserts 8097 8097 8097 0 n/a 00:07:47.509 00:07:47.509 Elapsed time = 0.007 seconds 00:07:47.509 12:51:51 -- 
unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:07:47.509 00:07:47.509 00:07:47.509 CUnit - A unit testing framework for C - Version 2.1-3 00:07:47.509 http://cunit.sourceforge.net/ 00:07:47.509 00:07:47.509 00:07:47.509 Suite: raid1 00:07:47.509 Test: test_raid1_start ...passed 00:07:47.509 Test: test_raid1_read_balancing ...passed 00:07:47.509 00:07:47.509 Run Summary: Type Total Ran Passed Failed Inactive 00:07:47.509 suites 1 1 n/a 0 0 00:07:47.509 tests 2 2 2 0 0 00:07:47.509 asserts 2856 2856 2856 0 n/a 00:07:47.509 00:07:47.509 Elapsed time = 0.004 seconds 00:07:47.509 12:51:51 -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:07:47.509 00:07:47.509 00:07:47.509 CUnit - A unit testing framework for C - Version 2.1-3 00:07:47.509 http://cunit.sourceforge.net/ 00:07:47.509 00:07:47.509 00:07:47.509 Suite: zone 00:07:47.509 Test: test_zone_get_operation ...passed 00:07:47.509 Test: test_bdev_zone_get_info ...passed 00:07:47.509 Test: test_bdev_zone_management ...passed 00:07:47.509 Test: test_bdev_zone_append ...passed 00:07:47.509 Test: test_bdev_zone_append_with_md ...passed 00:07:47.509 Test: test_bdev_zone_appendv ...passed 00:07:47.509 Test: test_bdev_zone_appendv_with_md ...passed 00:07:47.509 Test: test_bdev_io_get_append_location ...passed 00:07:47.509 00:07:47.509 Run Summary: Type Total Ran Passed Failed Inactive 00:07:47.509 suites 1 1 n/a 0 0 00:07:47.509 tests 8 8 8 0 0 00:07:47.509 asserts 94 94 94 0 n/a 00:07:47.509 00:07:47.509 Elapsed time = 0.001 seconds 00:07:47.509 12:51:51 -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:07:47.509 00:07:47.509 00:07:47.509 CUnit - A unit testing framework for C - Version 2.1-3 00:07:47.509 http://cunit.sourceforge.net/ 00:07:47.509 00:07:47.509 00:07:47.509 Suite: gpt_parse 00:07:47.509 Test: test_parse_mbr_and_primary ...[2024-04-17 12:51:51.553262] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:07:47.509 [2024-04-17 12:51:51.553787] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:07:47.509 [2024-04-17 12:51:51.553863] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:07:47.509 [2024-04-17 12:51:51.553978] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:07:47.509 [2024-04-17 12:51:51.554074] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:07:47.509 [2024-04-17 12:51:51.554213] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:07:47.509 passed 00:07:47.509 Test: test_parse_secondary ...[2024-04-17 12:51:51.555266] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:07:47.509 [2024-04-17 12:51:51.555360] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:07:47.509 [2024-04-17 12:51:51.555408] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:07:47.509 [2024-04-17 12:51:51.555451] 
/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:07:47.509 passed 00:07:47.509 Test: test_check_mbr ...[2024-04-17 12:51:51.556576] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:07:47.509 passed 00:07:47.509 Test: test_read_header ...[2024-04-17 12:51:51.556691] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:07:47.509 [2024-04-17 12:51:51.556776] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:07:47.509 [2024-04-17 12:51:51.556985] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:07:47.509 [2024-04-17 12:51:51.557122] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:07:47.509 [2024-04-17 12:51:51.557217] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:07:47.509 passed 00:07:47.509 Test: test_read_partitions ...[2024-04-17 12:51:51.557279] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:07:47.509 [2024-04-17 12:51:51.557330] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:07:47.509 [2024-04-17 12:51:51.557430] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:07:47.509 [2024-04-17 12:51:51.557497] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:07:47.509 [2024-04-17 12:51:51.557566] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:07:47.509 [2024-04-17 12:51:51.557605] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:07:47.509 [2024-04-17 12:51:51.558022] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:07:47.509 passed 00:07:47.509 00:07:47.509 Run Summary: Type Total Ran Passed Failed Inactive 00:07:47.509 suites 1 1 n/a 0 0 00:07:47.509 tests 5 5 5 0 0 00:07:47.509 asserts 33 33 33 0 n/a 00:07:47.509 00:07:47.509 Elapsed time = 0.006 seconds 00:07:47.509 12:51:51 -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:07:47.509 00:07:47.509 00:07:47.509 CUnit - A unit testing framework for C - Version 2.1-3 00:07:47.509 http://cunit.sourceforge.net/ 00:07:47.509 00:07:47.509 00:07:47.509 Suite: bdev_part 00:07:47.509 Test: part_test ...[2024-04-17 12:51:51.596149] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4548:bdev_name_add: *ERROR*: Bdev name test1 already exists 00:07:47.509 passed 00:07:47.509 Test: part_free_test ...passed 00:07:47.509 Test: part_get_io_channel_test ...passed 00:07:47.769 Test: part_construct_ext ...passed 00:07:47.769 00:07:47.769 Run Summary: Type Total Ran Passed Failed Inactive 00:07:47.769 suites 1 1 n/a 0 0 00:07:47.769 tests 4 4 4 0 0 00:07:47.769 asserts 48 48 48 0 n/a 00:07:47.769 00:07:47.769 Elapsed time = 0.056 seconds 00:07:47.769 12:51:51 -- 
unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:07:47.769 00:07:47.769 00:07:47.769 CUnit - A unit testing framework for C - Version 2.1-3 00:07:47.769 http://cunit.sourceforge.net/ 00:07:47.769 00:07:47.769 00:07:47.769 Suite: scsi_nvme_suite 00:07:47.769 Test: scsi_nvme_translate_test ...passed 00:07:47.769 00:07:47.769 Run Summary: Type Total Ran Passed Failed Inactive 00:07:47.769 suites 1 1 n/a 0 0 00:07:47.769 tests 1 1 1 0 0 00:07:47.769 asserts 104 104 104 0 n/a 00:07:47.769 00:07:47.769 Elapsed time = 0.000 seconds 00:07:47.769 12:51:51 -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:07:47.769 00:07:47.769 00:07:47.769 CUnit - A unit testing framework for C - Version 2.1-3 00:07:47.769 http://cunit.sourceforge.net/ 00:07:47.769 00:07:47.769 00:07:47.769 Suite: lvol 00:07:47.769 Test: ut_lvs_init ...[2024-04-17 12:51:51.716626] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:07:47.769 [2024-04-17 12:51:51.717606] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:07:47.769 passed 00:07:47.770 Test: ut_lvol_init ...passed 00:07:47.770 Test: ut_lvol_snapshot ...passed 00:07:47.770 Test: ut_lvol_clone ...passed 00:07:47.770 Test: ut_lvs_destroy ...passed 00:07:47.770 Test: ut_lvs_unload ...passed 00:07:47.770 Test: ut_lvol_resize ...[2024-04-17 12:51:51.720167] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1391:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:07:47.770 passed 00:07:47.770 Test: ut_lvol_set_read_only ...passed 00:07:47.770 Test: ut_lvol_hotremove ...passed 00:07:47.770 Test: ut_vbdev_lvol_get_io_channel ...passed 00:07:47.770 Test: ut_vbdev_lvol_io_type_supported ...passed 00:07:47.770 Test: ut_lvol_read_write ...passed 00:07:47.770 Test: ut_vbdev_lvol_submit_request ...passed 00:07:47.770 Test: ut_lvol_examine_config ...passed 00:07:47.770 Test: ut_lvol_examine_disk ...[2024-04-17 12:51:51.722040] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1533:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:07:47.770 passed 00:07:47.770 Test: ut_lvol_rename ...[2024-04-17 12:51:51.723199] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:07:47.770 [2024-04-17 12:51:51.723543] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1341:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:07:47.770 passed 00:07:47.770 Test: ut_bdev_finish ...passed 00:07:47.770 Test: ut_lvs_rename ...passed 00:07:47.770 Test: ut_lvol_seek ...passed 00:07:47.770 Test: ut_esnap_dev_create ...[2024-04-17 12:51:51.724943] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1868:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:07:47.770 [2024-04-17 12:51:51.725256] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1874:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:07:47.770 [2024-04-17 12:51:51.725398] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:07:47.770 [2024-04-17 12:51:51.725685] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1900:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap 
bdev 'a27fd8fe-d4b9-431e-a044-271016228ce4': -1 00:07:47.770 passed 00:07:47.770 Test: ut_lvol_esnap_clone_bad_args ...[2024-04-17 12:51:51.726067] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1277:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:07:47.770 [2024-04-17 12:51:51.726320] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1284:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:07:47.770 passed 00:07:47.770 00:07:47.770 Run Summary: Type Total Ran Passed Failed Inactive 00:07:47.770 suites 1 1 n/a 0 0 00:07:47.770 tests 21 21 21 0 0 00:07:47.770 asserts 758 758 758 0 n/a 00:07:47.770 00:07:47.770 Elapsed time = 0.010 seconds 00:07:47.770 12:51:51 -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:07:47.770 00:07:47.770 00:07:47.770 CUnit - A unit testing framework for C - Version 2.1-3 00:07:47.770 http://cunit.sourceforge.net/ 00:07:47.770 00:07:47.770 00:07:47.770 Suite: zone_block 00:07:47.770 Test: test_zone_block_create ...passed 00:07:47.770 Test: test_zone_block_create_invalid ...[2024-04-17 12:51:51.780277] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:07:47.770 [2024-04-17 12:51:51.780629] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-04-17 12:51:51.780907] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:07:47.770 [2024-04-17 12:51:51.780986] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-04-17 12:51:51.781163] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:07:47.770 [2024-04-17 12:51:51.781218] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-04-17 12:51:51.781336] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:07:47.770 [2024-04-17 12:51:51.781409] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argumentpassed 00:07:47.770 Test: test_get_zone_info ...[2024-04-17 12:51:51.781959] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:47.770 [2024-04-17 12:51:51.782044] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:47.770 [2024-04-17 12:51:51.782142] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:47.770 passed 00:07:47.770 Test: test_supported_io_types ...passed 00:07:47.770 Test: test_reset_zone ...[2024-04-17 12:51:51.782992] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:07:47.770 [2024-04-17 12:51:51.783085] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:47.770 passed 00:07:47.770 Test: test_open_zone ...[2024-04-17 12:51:51.783551] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:47.770 [2024-04-17 12:51:51.784206] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:47.770 [2024-04-17 12:51:51.784312] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:47.770 passed 00:07:47.770 Test: test_zone_write ...[2024-04-17 12:51:51.784795] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:07:47.770 [2024-04-17 12:51:51.784873] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:47.770 [2024-04-17 12:51:51.784945] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:07:47.770 [2024-04-17 12:51:51.785005] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:47.770 [2024-04-17 12:51:51.789851] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:07:47.770 [2024-04-17 12:51:51.789906] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:47.770 [2024-04-17 12:51:51.790007] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:07:47.770 [2024-04-17 12:51:51.790044] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:47.770 [2024-04-17 12:51:51.794956] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:07:47.770 [2024-04-17 12:51:51.795040] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:47.770 passed 00:07:47.770 Test: test_zone_read ...[2024-04-17 12:51:51.795557] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:07:47.770 [2024-04-17 12:51:51.795609] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:47.770 [2024-04-17 12:51:51.795701] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:07:47.770 [2024-04-17 12:51:51.795755] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
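The zone_write failures above all reduce to one invariant: a sequential zone only accepts writes at its current write pointer (hence "invalid address (lba 0x407, wp 0x405)") and within its capacity. spdk_bdev_zone_append() sidesteps the bookkeeping by letting the bdev pick the LBA. A sketch, assuming the public zoned-bdev API from spdk/bdev_zone.h; buffer and sizes are illustrative:

#include "spdk/bdev_zone.h"

/* On success, spdk_bdev_io_get_append_location() -- also exercised in the
 * suite above -- reports where the data actually landed. */
static void
append_done(struct spdk_bdev_io *bdev_io, bool success, void *cb_arg)
{
        (void)cb_arg;
        if (success) {
                uint64_t lba = spdk_bdev_io_get_append_location(bdev_io);
                (void)lba; /* record it instead of tracking the wp manually */
        }
        spdk_bdev_free_io(bdev_io);
}

static int
example_zone_append(struct spdk_bdev_desc *desc, struct spdk_io_channel *ch,
                    void *buf, uint64_t zone_start_lba, uint64_t num_blocks)
{
        /* A plain write at lba != wp fails exactly like the errors above. */
        return spdk_bdev_zone_append(desc, ch, buf, zone_start_lba, num_blocks,
                                     append_done, NULL);
}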
00:07:47.770 [2024-04-17 12:51:51.796201] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:07:47.770 [2024-04-17 12:51:51.796254] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:47.770 passed 00:07:47.770 Test: test_close_zone ...[2024-04-17 12:51:51.796657] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:47.770 [2024-04-17 12:51:51.796760] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:47.770 [2024-04-17 12:51:51.796997] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:47.770 [2024-04-17 12:51:51.797072] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:47.770 passed 00:07:47.770 Test: test_finish_zone ...[2024-04-17 12:51:51.797725] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:47.770 [2024-04-17 12:51:51.797806] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:47.770 passed 00:07:47.770 Test: test_append_zone ...[2024-04-17 12:51:51.798217] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:07:47.770 [2024-04-17 12:51:51.798273] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:47.770 [2024-04-17 12:51:51.798350] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:07:47.770 [2024-04-17 12:51:51.798396] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:47.770 [2024-04-17 12:51:51.808089] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:07:47.770 passed 00:07:47.770 00:07:47.770 [2024-04-17 12:51:51.808170] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:07:47.770 Run Summary: Type Total Ran Passed Failed Inactive 00:07:47.770 suites 1 1 n/a 0 0 00:07:47.770 tests 11 11 11 0 0 00:07:47.770 asserts 3437 3437 3437 0 n/a 00:07:47.770 00:07:47.770 Elapsed time = 0.029 seconds 00:07:47.770 12:51:51 -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:07:47.770 00:07:47.770 00:07:47.770 CUnit - A unit testing framework for C - Version 2.1-3 00:07:47.770 http://cunit.sourceforge.net/ 00:07:47.770 00:07:47.770 00:07:47.770 Suite: bdev 00:07:48.029 Test: basic ...[2024-04-17 12:51:51.913271] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x55d37918d401): Operation not permitted (rc=-1) 00:07:48.029 [2024-04-17 12:51:51.913728] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x6130000003c0 (0x55d37918d3c0): Operation not permitted (rc=-1) 00:07:48.029 [2024-04-17 12:51:51.913792] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x55d37918d401): Operation not permitted (rc=-1) 00:07:48.029 passed 00:07:48.029 Test: unregister_and_close ...passed 00:07:48.029 Test: unregister_and_close_different_threads ...passed 00:07:48.029 Test: basic_qos ...passed 00:07:48.287 Test: put_channel_during_reset ...passed 00:07:48.287 Test: aborted_reset ...passed 00:07:48.287 Test: aborted_reset_no_outstanding_io ...passed 00:07:48.287 Test: io_during_reset ...passed 00:07:48.287 Test: reset_completions ...passed 00:07:48.287 Test: io_during_qos_queue ...passed 00:07:48.546 Test: io_during_qos_reset ...passed 00:07:48.546 Test: enomem ...passed 00:07:48.546 Test: enomem_multi_bdev ...passed 00:07:48.546 Test: enomem_multi_bdev_unregister ...passed 00:07:48.546 Test: enomem_multi_io_target ...passed 00:07:48.546 Test: qos_dynamic_enable ...passed 00:07:48.806 Test: bdev_histograms_mt ...passed 00:07:48.806 Test: bdev_set_io_timeout_mt ...[2024-04-17 12:51:52.785748] thread.c: 465:spdk_thread_lib_fini: *ERROR*: io_device 0x6130000003c0 not unregistered 00:07:48.806 passed 00:07:48.806 Test: lock_lba_range_then_submit_io ...[2024-04-17 12:51:52.809870] thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x55d37918d380 already registered (old:0x6130000003c0 new:0x613000000c80) 00:07:48.806 passed 00:07:48.806 Test: unregister_during_reset ...passed 00:07:48.806 Test: event_notify_and_close ...passed 00:07:48.806 Suite: bdev_wrong_thread 00:07:48.806 Test: spdk_bdev_register_wt ...[2024-04-17 12:51:52.924314] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8412:spdk_bdev_register: *ERROR*: Cannot register bdev wt_bdev on thread 0x618000000880 (0x618000000880) 00:07:48.806 passed 00:07:48.806 Test: spdk_bdev_examine_wt ...[2024-04-17 12:51:52.924778] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 791:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x618000000880 (0x618000000880) 00:07:48.806 passed 00:07:48.806 00:07:48.806 Run Summary: Type Total Ran Passed Failed Inactive 00:07:48.806 suites 2 2 n/a 0 0 00:07:48.806 tests 23 23 23 0 0 00:07:48.806 asserts 601 601 601 0 n/a 00:07:48.806 00:07:48.806 Elapsed time = 1.041 seconds 00:07:49.064 00:07:49.064 real 0m5.460s 00:07:49.064 user 0m2.351s 00:07:49.064 sys 0m3.076s 00:07:49.064 12:51:52 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:07:49.064 12:51:52 -- common/autotest_common.sh@10 -- # set +x 00:07:49.064 ************************************ 00:07:49.064 END TEST unittest_bdev 00:07:49.064 
************************************ 00:07:49.064 12:51:52 -- unit/unittest.sh@213 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:49.064 12:51:52 -- unit/unittest.sh@218 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:49.064 12:51:52 -- unit/unittest.sh@223 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:49.064 12:51:52 -- unit/unittest.sh@227 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:49.064 12:51:52 -- unit/unittest.sh@228 -- # run_test unittest_bdev_raid5f /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:07:49.064 12:51:52 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:07:49.064 12:51:52 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:49.064 12:51:52 -- common/autotest_common.sh@10 -- # set +x 00:07:49.064 ************************************ 00:07:49.064 START TEST unittest_bdev_raid5f 00:07:49.064 ************************************ 00:07:49.064 12:51:53 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:07:49.064 00:07:49.064 00:07:49.064 CUnit - A unit testing framework for C - Version 2.1-3 00:07:49.064 http://cunit.sourceforge.net/ 00:07:49.064 00:07:49.064 00:07:49.064 Suite: raid5f 00:07:49.064 Test: test_raid5f_start ...passed 00:07:49.631 Test: test_raid5f_submit_read_request ...passed 00:07:49.631 Test: test_raid5f_stripe_request_map_iovecs ...passed 00:07:53.817 Test: test_raid5f_submit_full_stripe_write_request ...passed 00:08:11.938 Test: test_raid5f_chunk_write_error ...passed 00:08:20.049 Test: test_raid5f_chunk_write_error_with_enomem ...passed 00:08:21.948 Test: test_raid5f_submit_full_stripe_write_request_degraded ...passed 00:08:54.124 Test: test_raid5f_submit_read_request_degraded ...passed 00:08:54.124 00:08:54.124 Run Summary: Type Total Ran Passed Failed Inactive 00:08:54.124 suites 1 1 n/a 0 0 00:08:54.124 tests 8 8 8 0 0 00:08:54.124 asserts 351864 351864 351864 0 n/a 00:08:54.124 00:08:54.124 Elapsed time = 60.843 seconds 00:08:54.124 00:08:54.124 real 1m0.948s 00:08:54.124 user 0m57.850s 00:08:54.124 sys 0m3.066s 00:08:54.124 ************************************ 00:08:54.124 END TEST unittest_bdev_raid5f 00:08:54.124 ************************************ 00:08:54.124 12:52:53 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:08:54.124 12:52:53 -- common/autotest_common.sh@10 -- # set +x 00:08:54.124 12:52:54 -- unit/unittest.sh@231 -- # run_test unittest_blob_blobfs unittest_blob 00:08:54.124 12:52:54 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:08:54.124 12:52:54 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:08:54.124 12:52:54 -- common/autotest_common.sh@10 -- # set +x 00:08:54.124 ************************************ 00:08:54.124 START TEST unittest_blob_blobfs 00:08:54.124 ************************************ 00:08:54.124 12:52:54 -- common/autotest_common.sh@1099 -- # unittest_blob 00:08:54.124 12:52:54 -- unit/unittest.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:08:54.124 12:52:54 -- unit/unittest.sh@39 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:08:54.124 00:08:54.124 00:08:54.124 CUnit - A unit testing framework for C - Version 2.1-3 00:08:54.124 http://cunit.sourceforge.net/ 00:08:54.124 00:08:54.124 
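Before the blob suite output: the raid5f suite above spends roughly a minute driving full-stripe writes, chunk-write errors, and degraded reads, all of which rest on single-parity XOR. A generic illustration of the invariant, not SPDK's implementation (which lives under module/bdev/raid/): the parity chunk is the XOR of the data chunks, so any one lost chunk is recoverable by XOR-ing the survivors.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* parity[j] = chunks[0][j] ^ chunks[1][j] ^ ... ^ chunks[n-1][j].
 * Recomputing this over the surviving chunks regenerates a missing one,
 * which is what the degraded read/write tests above rely on. */
static void
xor_parity(uint8_t *parity, uint8_t *const *chunks, size_t n_chunks, size_t len)
{
        memset(parity, 0, len);
        for (size_t i = 0; i < n_chunks; i++) {
                for (size_t j = 0; j < len; j++) {
                        parity[j] ^= chunks[i][j];
                }
        }
}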
00:08:54.124 Suite: blob_nocopy_noextent 00:08:54.124 Test: blob_init ...[2024-04-17 12:52:54.102071] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5404:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:08:54.124 passed 00:08:54.124 Test: blob_thin_provision ...passed 00:08:54.124 Test: blob_read_only ...passed 00:08:54.124 Test: bs_load ...[2024-04-17 12:52:54.214703] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 898:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:08:54.124 passed 00:08:54.124 Test: bs_load_custom_cluster_size ...passed 00:08:54.124 Test: bs_load_after_failed_grow ...passed 00:08:54.124 Test: bs_cluster_sz ...[2024-04-17 12:52:54.254294] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3740:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:08:54.124 [2024-04-17 12:52:54.254914] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5535:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:08:54.124 [2024-04-17 12:52:54.255270] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3799:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:08:54.124 passed 00:08:54.124 Test: bs_resize_md ...passed 00:08:54.124 Test: bs_destroy ...passed 00:08:54.124 Test: bs_type ...passed 00:08:54.124 Test: bs_super_block ...passed 00:08:54.124 Test: bs_test_recover_cluster_count ...passed 00:08:54.124 Test: bs_grow_live ...passed 00:08:54.124 Test: bs_grow_live_no_space ...passed 00:08:54.124 Test: bs_test_grow ...passed 00:08:54.124 Test: blob_serialize_test ...passed 00:08:54.124 Test: super_block_crc ...passed 00:08:54.124 Test: blob_thin_prov_write_count_io ...passed 00:08:54.124 Test: blob_thin_prov_unmap_cluster ...passed 00:08:54.124 Test: bs_load_iter_test ...passed 00:08:54.124 Test: blob_relations ...[2024-04-17 12:52:54.489227] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:54.124 [2024-04-17 12:52:54.489553] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:54.124 [2024-04-17 12:52:54.490587] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:54.124 [2024-04-17 12:52:54.490786] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:54.124 passed 00:08:54.125 Test: blob_relations2 ...[2024-04-17 12:52:54.507645] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:54.125 [2024-04-17 12:52:54.508034] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:54.125 [2024-04-17 12:52:54.508108] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:54.125 [2024-04-17 12:52:54.508348] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:54.125 [2024-04-17 12:52:54.509969] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:54.125 [2024-04-17 12:52:54.510191] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:54.125 [2024-04-17 12:52:54.510701] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:54.125 [2024-04-17 12:52:54.510895] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:54.125 passed 00:08:54.125 Test: blob_relations3 ...passed 00:08:54.125 Test: blobstore_clean_power_failure ...passed 00:08:54.125 Test: blob_delete_snapshot_power_failure ...[2024-04-17 12:52:54.698819] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:08:54.125 [2024-04-17 12:52:54.714045] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:54.125 [2024-04-17 12:52:54.714352] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:54.125 [2024-04-17 12:52:54.714459] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:54.125 [2024-04-17 12:52:54.728894] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:08:54.125 [2024-04-17 12:52:54.729284] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1399:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:08:54.125 [2024-04-17 12:52:54.729364] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:54.125 [2024-04-17 12:52:54.729527] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:54.125 [2024-04-17 12:52:54.744384] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7488:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:08:54.125 [2024-04-17 12:52:54.744786] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:54.125 [2024-04-17 12:52:54.759522] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7360:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:08:54.125 [2024-04-17 12:52:54.760001] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:54.125 [2024-04-17 12:52:54.774878] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7304:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:08:54.125 [2024-04-17 12:52:54.775283] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:54.125 passed 00:08:54.125 Test: blob_create_snapshot_power_failure ...[2024-04-17 12:52:54.824040] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:54.125 [2024-04-17 12:52:54.853011] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:08:54.125 [2024-04-17 12:52:54.867413] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6352:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:08:54.125 passed 00:08:54.125 Test: blob_io_unit ...passed 00:08:54.125 Test: blob_io_unit_compatibility 
...passed 00:08:54.125 Test: blob_ext_md_pages ...passed 00:08:54.125 Test: blob_esnap_io_4096_4096 ...passed 00:08:54.125 Test: blob_esnap_io_512_512 ...passed 00:08:54.125 Test: blob_esnap_io_4096_512 ...passed 00:08:54.125 Test: blob_esnap_io_512_4096 ...passed 00:08:54.125 Suite: blob_bs_nocopy_noextent 00:08:54.125 Test: blob_open ...passed 00:08:54.125 Test: blob_create ...[2024-04-17 12:52:55.150845] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:08:54.125 passed 00:08:54.125 Test: blob_create_loop ...passed 00:08:54.125 Test: blob_create_fail ...[2024-04-17 12:52:55.264816] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:54.125 passed 00:08:54.125 Test: blob_create_internal ...passed 00:08:54.125 Test: blob_create_zero_extent ...passed 00:08:54.125 Test: blob_snapshot ...passed 00:08:54.125 Test: blob_clone ...passed 00:08:54.125 Test: blob_inflate ...[2024-04-17 12:52:55.477984] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7010:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:08:54.125 passed 00:08:54.125 Test: blob_delete ...passed 00:08:54.125 Test: blob_resize_test ...[2024-04-17 12:52:55.557558] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:08:54.125 passed 00:08:54.125 Test: channel_ops ...passed 00:08:54.125 Test: blob_super ...passed 00:08:54.125 Test: blob_rw_verify_iov ...passed 00:08:54.125 Test: blob_unmap ...passed 00:08:54.125 Test: blob_iter ...passed 00:08:54.125 Test: blob_parse_md ...passed 00:08:54.125 Test: bs_load_pending_removal ...passed 00:08:54.125 Test: bs_unload ...[2024-04-17 12:52:55.900965] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5792:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:08:54.125 passed 00:08:54.125 Test: bs_usable_clusters ...passed 00:08:54.125 Test: blob_crc ...[2024-04-17 12:52:55.984759] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1611:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:54.125 [2024-04-17 12:52:55.985039] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1611:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:54.125 passed 00:08:54.125 Test: blob_flags ...passed 00:08:54.125 Test: bs_version ...passed 00:08:54.125 Test: blob_set_xattrs_test ...[2024-04-17 12:52:56.107158] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:54.125 [2024-04-17 12:52:56.107520] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:54.125 passed 00:08:54.125 Test: blob_thin_prov_alloc ...passed 00:08:54.125 Test: blob_insert_cluster_msg_test ...passed 00:08:54.125 Test: blob_thin_prov_rw ...passed 00:08:54.125 Test: blob_thin_prov_rle ...passed 00:08:54.125 Test: blob_thin_prov_rw_iov ...passed 00:08:54.125 Test: blob_snapshot_rw ...passed 00:08:54.125 Test: blob_snapshot_rw_iov ...passed 00:08:54.125 Test: blob_inflate_rw ...passed 00:08:54.125 Test: blob_snapshot_freeze_io ...passed 00:08:54.125 Test: blob_operation_split_rw ...passed 00:08:54.125 Test: blob_operation_split_rw_iov ...passed 
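
The bs_unload failure in this suite ("Blobstore still has open blobs") is an ordering rule, not corruption: spdk_bs_unload() refuses to tear down a blobstore while any blob handle remains open. A minimal C sketch of the required close-then-unload chain, assuming an already-loaded blobstore; the callback names here are hypothetical, and signatures follow include/spdk/blob.h but may vary across SPDK releases:

#include "spdk/blob.h"

static void
unload_done(void *cb_arg, int bserrno)
{
    /* bserrno == 0 once no blob handles remain open */
}

static void
blob_closed(void *cb_arg, int bserrno)
{
    struct spdk_blob_store *bs = cb_arg;

    /* Calling spdk_bs_unload() before this point is what produces
     * "spdk_bs_unload: *ERROR*: Blobstore still has open blobs". */
    spdk_bs_unload(bs, unload_done, NULL);
}

static void
close_then_unload(struct spdk_blob_store *bs, struct spdk_blob *blob)
{
    spdk_blob_close(blob, blob_closed, bs);
}
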
00:08:54.125 Test: blob_simultaneous_operations ...[2024-04-17 12:52:57.164162] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:54.125 [2024-04-17 12:52:57.164485] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:54.125 [2024-04-17 12:52:57.165791] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:54.125 [2024-04-17 12:52:57.165980] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:54.125 [2024-04-17 12:52:57.177673] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:54.125 [2024-04-17 12:52:57.177996] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:54.125 [2024-04-17 12:52:57.178192] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:54.125 [2024-04-17 12:52:57.178362] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:54.125 passed 00:08:54.125 Test: blob_persist_test ...passed 00:08:54.125 Test: blob_decouple_snapshot ...passed 00:08:54.125 Test: blob_seek_io_unit ...passed 00:08:54.125 Test: blob_nested_freezes ...passed 00:08:54.125 Suite: blob_blob_nocopy_noextent 00:08:54.125 Test: blob_write ...passed 00:08:54.125 Test: blob_read ...passed 00:08:54.125 Test: blob_rw_verify ...passed 00:08:54.125 Test: blob_rw_verify_iov_nomem ...passed 00:08:54.125 Test: blob_rw_iov_read_only ...passed 00:08:54.125 Test: blob_xattr ...passed 00:08:54.125 Test: blob_dirty_shutdown ...passed 00:08:54.125 Test: blob_is_degraded ...passed 00:08:54.125 Suite: blob_esnap_bs_nocopy_noextent 00:08:54.125 Test: blob_esnap_create ...passed 00:08:54.125 Test: blob_esnap_thread_add_remove ...passed 00:08:54.125 Test: blob_esnap_clone_snapshot ...passed 00:08:54.125 Test: blob_esnap_clone_inflate ...passed 00:08:54.125 Test: blob_esnap_clone_decouple ...passed 00:08:54.125 Test: blob_esnap_clone_reload ...passed 00:08:54.125 Test: blob_esnap_hotplug ...passed 00:08:54.125 Suite: blob_nocopy_extent 00:08:54.125 Test: blob_init ...[2024-04-17 12:52:58.009471] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5404:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:08:54.125 passed 00:08:54.126 Test: blob_thin_provision ...passed 00:08:54.126 Test: blob_read_only ...passed 00:08:54.126 Test: bs_load ...[2024-04-17 12:52:58.066053] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 898:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:08:54.126 passed 00:08:54.126 Test: bs_load_custom_cluster_size ...passed 00:08:54.126 Test: bs_load_after_failed_grow ...passed 00:08:54.126 Test: bs_cluster_sz ...[2024-04-17 12:52:58.097113] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3740:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:08:54.126 [2024-04-17 12:52:58.097459] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5535:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
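
The bs_cluster_sz errors repeat in every suite because each one re-runs the same option validation: cluster_sz must be non-zero and no smaller than the 4096-byte blobstore page (4095 fails, exactly as logged). A sketch of setting it through spdk_bs_opts before spdk_bs_init(), assuming a prepared spdk_bs_dev; the two-argument spdk_bs_opts_init() is the newer form, and older SPDK releases take only the opts pointer:

#include "spdk/blob.h"

static void
init_done(void *cb_arg, struct spdk_blob_store *bs, int bserrno)
{
    /* bserrno != 0 when option verification or bs_alloc() rejects opts */
}

static void
init_blobstore(struct spdk_bs_dev *bs_dev)
{
    struct spdk_bs_opts opts;

    spdk_bs_opts_init(&opts, sizeof(opts));
    /* opts.cluster_sz = 4095; would be rejected: below the 4096-byte page */
    opts.cluster_sz = 1024 * 1024;  /* valid: a multiple of the page size */
    spdk_bs_init(bs_dev, &opts, init_done, NULL);
}
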
00:08:54.126 [2024-04-17 12:52:58.097659] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3799:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:08:54.126 passed 00:08:54.126 Test: bs_resize_md ...passed 00:08:54.126 Test: bs_destroy ...passed 00:08:54.126 Test: bs_type ...passed 00:08:54.126 Test: bs_super_block ...passed 00:08:54.126 Test: bs_test_recover_cluster_count ...passed 00:08:54.126 Test: bs_grow_live ...passed 00:08:54.126 Test: bs_grow_live_no_space ...passed 00:08:54.126 Test: bs_test_grow ...passed 00:08:54.126 Test: blob_serialize_test ...passed 00:08:54.126 Test: super_block_crc ...passed 00:08:54.126 Test: blob_thin_prov_write_count_io ...passed 00:08:54.384 Test: blob_thin_prov_unmap_cluster ...passed 00:08:54.384 Test: bs_load_iter_test ...passed 00:08:54.384 Test: blob_relations ...[2024-04-17 12:52:58.306873] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:54.384 [2024-04-17 12:52:58.307169] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:54.385 [2024-04-17 12:52:58.308208] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:54.385 [2024-04-17 12:52:58.308375] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:54.385 passed 00:08:54.385 Test: blob_relations2 ...[2024-04-17 12:52:58.324893] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:54.385 [2024-04-17 12:52:58.325216] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:54.385 [2024-04-17 12:52:58.325298] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:54.385 [2024-04-17 12:52:58.325556] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:54.385 [2024-04-17 12:52:58.327023] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:54.385 [2024-04-17 12:52:58.327214] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:54.385 [2024-04-17 12:52:58.327762] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:54.385 [2024-04-17 12:52:58.327943] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:54.385 passed 00:08:54.385 Test: blob_relations3 ...passed 00:08:54.385 Test: blobstore_clean_power_failure ...passed 00:08:54.385 Test: blob_delete_snapshot_power_failure ...[2024-04-17 12:52:58.519031] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:08:54.644 [2024-04-17 12:52:58.534394] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1512:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:08:54.644 [2024-04-17 12:52:58.549451] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:54.644 [2024-04-17 12:52:58.549833] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:54.644 [2024-04-17 12:52:58.549909] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:54.644 [2024-04-17 12:52:58.564309] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:08:54.644 [2024-04-17 12:52:58.564533] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1399:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:08:54.644 [2024-04-17 12:52:58.564607] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:54.644 [2024-04-17 12:52:58.564879] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:54.644 [2024-04-17 12:52:58.579169] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1512:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:08:54.644 [2024-04-17 12:52:58.579483] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1399:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:08:54.644 [2024-04-17 12:52:58.579559] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:54.644 [2024-04-17 12:52:58.579687] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:54.644 [2024-04-17 12:52:58.594420] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7488:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:08:54.644 [2024-04-17 12:52:58.594835] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:54.644 [2024-04-17 12:52:58.610120] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7360:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:08:54.644 [2024-04-17 12:52:58.610437] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:54.644 [2024-04-17 12:52:58.626229] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7304:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:08:54.644 [2024-04-17 12:52:58.626585] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:54.644 passed 00:08:54.644 Test: blob_create_snapshot_power_failure ...[2024-04-17 12:52:58.672381] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:54.644 [2024-04-17 12:52:58.688144] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1512:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:08:54.644 [2024-04-17 12:52:58.718535] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:08:54.644 [2024-04-17 12:52:58.733645] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6352:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:08:54.644 passed 00:08:54.902 Test: blob_io_unit ...passed 00:08:54.902 Test: blob_io_unit_compatibility ...passed 00:08:54.902 Test: blob_ext_md_pages ...passed 00:08:54.902 Test: blob_esnap_io_4096_4096 ...passed 00:08:54.902 Test: blob_esnap_io_512_512 ...passed 00:08:54.902 Test: blob_esnap_io_4096_512 ...passed 00:08:54.902 Test: 
blob_esnap_io_512_4096 ...passed 00:08:54.902 Suite: blob_bs_nocopy_extent 00:08:54.902 Test: blob_open ...passed 00:08:54.902 Test: blob_create ...[2024-04-17 12:52:59.039375] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:08:55.160 passed 00:08:55.160 Test: blob_create_loop ...passed 00:08:55.160 Test: blob_create_fail ...[2024-04-17 12:52:59.162102] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:55.160 passed 00:08:55.160 Test: blob_create_internal ...passed 00:08:55.160 Test: blob_create_zero_extent ...passed 00:08:55.160 Test: blob_snapshot ...passed 00:08:55.419 Test: blob_clone ...passed 00:08:55.419 Test: blob_inflate ...[2024-04-17 12:52:59.373670] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7010:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:08:55.419 passed 00:08:55.419 Test: blob_delete ...passed 00:08:55.419 Test: blob_resize_test ...[2024-04-17 12:52:59.450181] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:08:55.419 passed 00:08:55.419 Test: channel_ops ...passed 00:08:55.419 Test: blob_super ...passed 00:08:55.677 Test: blob_rw_verify_iov ...passed 00:08:55.677 Test: blob_unmap ...passed 00:08:55.677 Test: blob_iter ...passed 00:08:55.677 Test: blob_parse_md ...passed 00:08:55.677 Test: bs_load_pending_removal ...passed 00:08:55.677 Test: bs_unload ...[2024-04-17 12:52:59.768761] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5792:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:08:55.677 passed 00:08:55.936 Test: bs_usable_clusters ...passed 00:08:55.936 Test: blob_crc ...[2024-04-17 12:52:59.849918] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1611:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:55.936 [2024-04-17 12:52:59.850322] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1611:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:55.936 passed 00:08:55.936 Test: blob_flags ...passed 00:08:55.936 Test: bs_version ...passed 00:08:55.936 Test: blob_set_xattrs_test ...[2024-04-17 12:52:59.973700] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:55.936 [2024-04-17 12:52:59.974013] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:55.936 passed 00:08:56.193 Test: blob_thin_prov_alloc ...passed 00:08:56.193 Test: blob_insert_cluster_msg_test ...passed 00:08:56.193 Test: blob_thin_prov_rw ...passed 00:08:56.193 Test: blob_thin_prov_rle ...passed 00:08:56.193 Test: blob_thin_prov_rw_iov ...passed 00:08:56.193 Test: blob_snapshot_rw ...passed 00:08:56.451 Test: blob_snapshot_rw_iov ...passed 00:08:56.709 Test: blob_inflate_rw ...passed 00:08:56.709 Test: blob_snapshot_freeze_io ...passed 00:08:56.709 Test: blob_operation_split_rw ...passed 00:08:56.967 Test: blob_operation_split_rw_iov ...passed 00:08:56.967 Test: blob_simultaneous_operations ...[2024-04-17 12:53:01.014389] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:56.967 [2024-04-17 
12:53:01.014781] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:56.967 [2024-04-17 12:53:01.016000] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:56.967 [2024-04-17 12:53:01.016195] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:56.967 [2024-04-17 12:53:01.029440] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:56.967 [2024-04-17 12:53:01.029702] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:56.967 [2024-04-17 12:53:01.029869] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:56.967 [2024-04-17 12:53:01.030050] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:56.967 passed 00:08:56.967 Test: blob_persist_test ...passed 00:08:57.224 Test: blob_decouple_snapshot ...passed 00:08:57.224 Test: blob_seek_io_unit ...passed 00:08:57.224 Test: blob_nested_freezes ...passed 00:08:57.224 Suite: blob_blob_nocopy_extent 00:08:57.224 Test: blob_write ...passed 00:08:57.224 Test: blob_read ...passed 00:08:57.481 Test: blob_rw_verify ...passed 00:08:57.481 Test: blob_rw_verify_iov_nomem ...passed 00:08:57.481 Test: blob_rw_iov_read_only ...passed 00:08:57.481 Test: blob_xattr ...passed 00:08:57.481 Test: blob_dirty_shutdown ...passed 00:08:57.481 Test: blob_is_degraded ...passed 00:08:57.481 Suite: blob_esnap_bs_nocopy_extent 00:08:57.774 Test: blob_esnap_create ...passed 00:08:57.774 Test: blob_esnap_thread_add_remove ...passed 00:08:57.774 Test: blob_esnap_clone_snapshot ...passed 00:08:57.774 Test: blob_esnap_clone_inflate ...passed 00:08:57.774 Test: blob_esnap_clone_decouple ...passed 00:08:57.774 Test: blob_esnap_clone_reload ...passed 00:08:57.774 Test: blob_esnap_hotplug ...passed 00:08:57.774 Suite: blob_copy_noextent 00:08:57.774 Test: blob_init ...[2024-04-17 12:53:01.873082] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5404:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:08:57.774 passed 00:08:58.033 Test: blob_thin_provision ...passed 00:08:58.033 Test: blob_read_only ...passed 00:08:58.033 Test: bs_load ...[2024-04-17 12:53:01.927534] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 898:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:08:58.033 passed 00:08:58.033 Test: bs_load_custom_cluster_size ...passed 00:08:58.033 Test: bs_load_after_failed_grow ...passed 00:08:58.033 Test: bs_cluster_sz ...[2024-04-17 12:53:01.957954] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3740:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:08:58.033 [2024-04-17 12:53:01.958220] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5535:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
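
The two deletability errors that dominate this log ("Cannot remove snapshot because it is open" and "Cannot remove snapshot with more than one clone") are both raised by the bs_is_blob_deletable() gate in front of the delete path: a snapshot is only removable while closed and while at most one clone references it, in which case that clone is re-parented. A sketch of the public call that trips the gate, with a hypothetical id and callback; the exact errno surfaced is an assumption read from the log, not a documented contract:

#include "spdk/blob.h"

static void
delete_done(void *cb_arg, int bserrno)
{
    /* non-zero when the snapshot is open or has two or more clones */
}

static void
try_delete_snapshot(struct spdk_blob_store *bs, spdk_blob_id snapshot_id)
{
    spdk_bs_delete_blob(bs, snapshot_id, delete_done, NULL);
}
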
00:08:58.033 [2024-04-17 12:53:01.958375] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3799:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:08:58.033 passed 00:08:58.033 Test: bs_resize_md ...passed 00:08:58.033 Test: bs_destroy ...passed 00:08:58.033 Test: bs_type ...passed 00:08:58.033 Test: bs_super_block ...passed 00:08:58.033 Test: bs_test_recover_cluster_count ...passed 00:08:58.033 Test: bs_grow_live ...passed 00:08:58.033 Test: bs_grow_live_no_space ...passed 00:08:58.033 Test: bs_test_grow ...passed 00:08:58.033 Test: blob_serialize_test ...passed 00:08:58.033 Test: super_block_crc ...passed 00:08:58.033 Test: blob_thin_prov_write_count_io ...passed 00:08:58.033 Test: blob_thin_prov_unmap_cluster ...passed 00:08:58.033 Test: bs_load_iter_test ...passed 00:08:58.033 Test: blob_relations ...[2024-04-17 12:53:02.168490] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:58.033 [2024-04-17 12:53:02.168798] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:58.033 [2024-04-17 12:53:02.169423] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:58.033 [2024-04-17 12:53:02.169559] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:58.033 passed 00:08:58.291 Test: blob_relations2 ...[2024-04-17 12:53:02.185775] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:58.291 [2024-04-17 12:53:02.186083] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:58.291 [2024-04-17 12:53:02.186149] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:58.291 [2024-04-17 12:53:02.186261] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:58.291 [2024-04-17 12:53:02.187346] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:58.291 [2024-04-17 12:53:02.187511] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:58.291 [2024-04-17 12:53:02.187875] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:58.291 [2024-04-17 12:53:02.188031] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:58.291 passed 00:08:58.291 Test: blob_relations3 ...passed 00:08:58.291 Test: blobstore_clean_power_failure ...passed 00:08:58.291 Test: blob_delete_snapshot_power_failure ...[2024-04-17 12:53:02.374560] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:08:58.291 [2024-04-17 12:53:02.392414] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:58.291 [2024-04-17 12:53:02.392708] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:58.291 [2024-04-17 12:53:02.392772] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:58.291 [2024-04-17 12:53:02.406741] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:08:58.291 [2024-04-17 12:53:02.406991] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1399:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:08:58.291 [2024-04-17 12:53:02.407050] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:58.291 [2024-04-17 12:53:02.407185] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:58.291 [2024-04-17 12:53:02.421847] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7488:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:08:58.291 [2024-04-17 12:53:02.422204] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:58.549 [2024-04-17 12:53:02.436920] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7360:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:08:58.549 [2024-04-17 12:53:02.437190] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:58.549 [2024-04-17 12:53:02.451863] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7304:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:08:58.549 [2024-04-17 12:53:02.452154] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:58.549 passed 00:08:58.549 Test: blob_create_snapshot_power_failure ...[2024-04-17 12:53:02.495848] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:58.549 [2024-04-17 12:53:02.524798] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:08:58.549 [2024-04-17 12:53:02.539314] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6352:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:08:58.549 passed 00:08:58.549 Test: blob_io_unit ...passed 00:08:58.549 Test: blob_io_unit_compatibility ...passed 00:08:58.549 Test: blob_ext_md_pages ...passed 00:08:58.549 Test: blob_esnap_io_4096_4096 ...passed 00:08:58.807 Test: blob_esnap_io_512_512 ...passed 00:08:58.807 Test: blob_esnap_io_4096_512 ...passed 00:08:58.807 Test: blob_esnap_io_512_4096 ...passed 00:08:58.807 Suite: blob_bs_copy_noextent 00:08:58.807 Test: blob_open ...passed 00:08:58.807 Test: blob_create ...[2024-04-17 12:53:02.818030] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:08:58.807 passed 00:08:58.807 Test: blob_create_loop ...passed 00:08:58.807 Test: blob_create_fail ...[2024-04-17 12:53:02.923483] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:58.807 passed 00:08:59.065 Test: blob_create_internal ...passed 00:08:59.065 Test: blob_create_zero_extent ...passed 00:08:59.065 Test: blob_snapshot ...passed 00:08:59.065 Test: blob_clone ...passed 00:08:59.065 Test: blob_inflate ...[2024-04-17 12:53:03.128617] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7010:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:08:59.065 passed 00:08:59.065 Test: blob_delete ...passed 00:08:59.323 Test: blob_resize_test ...[2024-04-17 12:53:03.212823] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:08:59.323 passed 00:08:59.323 Test: channel_ops ...passed 00:08:59.323 Test: blob_super ...passed 00:08:59.323 Test: blob_rw_verify_iov ...passed 00:08:59.323 Test: blob_unmap ...passed 00:08:59.323 Test: blob_iter ...passed 00:08:59.582 Test: blob_parse_md ...passed 00:08:59.582 Test: bs_load_pending_removal ...passed 00:08:59.582 Test: bs_unload ...[2024-04-17 12:53:03.546879] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5792:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:08:59.582 passed 00:08:59.582 Test: bs_usable_clusters ...passed 00:08:59.582 Test: blob_crc ...[2024-04-17 12:53:03.633481] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1611:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:59.582 [2024-04-17 12:53:03.633872] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1611:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:59.582 passed 00:08:59.582 Test: blob_flags ...passed 00:08:59.840 Test: bs_version ...passed 00:08:59.840 Test: blob_set_xattrs_test ...[2024-04-17 12:53:03.760554] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:59.840 [2024-04-17 12:53:03.760888] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:59.840 passed 00:08:59.840 Test: blob_thin_prov_alloc ...passed 00:09:00.098 Test: blob_insert_cluster_msg_test ...passed 00:09:00.098 Test: blob_thin_prov_rw ...passed 00:09:00.098 Test: blob_thin_prov_rle ...passed 00:09:00.098 Test: blob_thin_prov_rw_iov ...passed 00:09:00.098 Test: blob_snapshot_rw ...passed 00:09:00.098 Test: blob_snapshot_rw_iov ...passed 00:09:00.356 Test: blob_inflate_rw ...passed 00:09:00.614 Test: blob_snapshot_freeze_io ...passed 00:09:00.614 Test: blob_operation_split_rw ...passed 00:09:00.874 Test: blob_operation_split_rw_iov ...passed 00:09:00.874 Test: blob_simultaneous_operations ...[2024-04-17 12:53:04.856740] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:00.874 [2024-04-17 12:53:04.857291] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:00.874 [2024-04-17 12:53:04.858137] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:00.874 [2024-04-17 12:53:04.858329] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:00.874 [2024-04-17 12:53:04.861795] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:00.874 [2024-04-17 12:53:04.862030] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:00.874 [2024-04-17 12:53:04.862212] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:09:00.874 [2024-04-17 12:53:04.862390] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:00.874 passed 00:09:00.874 Test: blob_persist_test ...passed 00:09:00.874 Test: blob_decouple_snapshot ...passed 00:09:01.133 Test: blob_seek_io_unit ...passed 00:09:01.133 Test: blob_nested_freezes ...passed 00:09:01.133 Suite: blob_blob_copy_noextent 00:09:01.133 Test: blob_write ...passed 00:09:01.133 Test: blob_read ...passed 00:09:01.133 Test: blob_rw_verify ...passed 00:09:01.133 Test: blob_rw_verify_iov_nomem ...passed 00:09:01.391 Test: blob_rw_iov_read_only ...passed 00:09:01.391 Test: blob_xattr ...passed 00:09:01.391 Test: blob_dirty_shutdown ...passed 00:09:01.391 Test: blob_is_degraded ...passed 00:09:01.391 Suite: blob_esnap_bs_copy_noextent 00:09:01.391 Test: blob_esnap_create ...passed 00:09:01.391 Test: blob_esnap_thread_add_remove ...passed 00:09:01.391 Test: blob_esnap_clone_snapshot ...passed 00:09:01.649 Test: blob_esnap_clone_inflate ...passed 00:09:01.650 Test: blob_esnap_clone_decouple ...passed 00:09:01.650 Test: blob_esnap_clone_reload ...passed 00:09:01.650 Test: blob_esnap_hotplug ...passed 00:09:01.650 Suite: blob_copy_extent 00:09:01.650 Test: blob_init ...[2024-04-17 12:53:05.681213] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5404:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:09:01.650 passed 00:09:01.650 Test: blob_thin_provision ...passed 00:09:01.650 Test: blob_read_only ...passed 00:09:01.650 Test: bs_load ...[2024-04-17 12:53:05.736348] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 898:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:09:01.650 passed 00:09:01.650 Test: bs_load_custom_cluster_size ...passed 00:09:01.650 Test: bs_load_after_failed_grow ...passed 00:09:01.650 Test: bs_cluster_sz ...[2024-04-17 12:53:05.765164] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3740:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:09:01.650 [2024-04-17 12:53:05.765419] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5535:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
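
The blob_inflate cases above exercise the negative path of parent decoupling: "Cannot decouple parent of blob with no parent" means the target blob is not a clone, so bs_inflate_blob_open_cpl() has nothing to detach. A sketch of the public entry point involved, assuming an open io_channel and a valid blob id; the callback is a hypothetical stand-in:

#include "spdk/blob.h"

static void
op_done(void *cb_arg, int bserrno)
{
    /* fails for a blob that has no parent to decouple */
}

static void
decouple(struct spdk_blob_store *bs, struct spdk_io_channel *ch,
         spdk_blob_id blobid)
{
    /* spdk_bs_inflate_blob() is the sibling call that also allocates
     * clusters; both reject a blob without a parent. */
    spdk_bs_blob_decouple_parent(bs, ch, blobid, op_done, NULL);
}
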
00:09:01.650 [2024-04-17 12:53:05.765561] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3799:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:09:01.650 passed 00:09:01.908 Test: bs_resize_md ...passed 00:09:01.908 Test: bs_destroy ...passed 00:09:01.908 Test: bs_type ...passed 00:09:01.908 Test: bs_super_block ...passed 00:09:01.908 Test: bs_test_recover_cluster_count ...passed 00:09:01.908 Test: bs_grow_live ...passed 00:09:01.908 Test: bs_grow_live_no_space ...passed 00:09:01.908 Test: bs_test_grow ...passed 00:09:01.908 Test: blob_serialize_test ...passed 00:09:01.908 Test: super_block_crc ...passed 00:09:01.908 Test: blob_thin_prov_write_count_io ...passed 00:09:01.908 Test: blob_thin_prov_unmap_cluster ...passed 00:09:01.908 Test: bs_load_iter_test ...passed 00:09:01.908 Test: blob_relations ...[2024-04-17 12:53:05.967751] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:01.908 [2024-04-17 12:53:05.968150] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:01.908 [2024-04-17 12:53:05.968943] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:01.908 [2024-04-17 12:53:05.969157] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:01.908 passed 00:09:01.908 Test: blob_relations2 ...[2024-04-17 12:53:05.984398] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:01.908 [2024-04-17 12:53:05.984787] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:01.908 [2024-04-17 12:53:05.984964] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:01.908 [2024-04-17 12:53:05.985066] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:01.908 [2024-04-17 12:53:05.986123] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:01.908 [2024-04-17 12:53:05.986281] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:01.908 [2024-04-17 12:53:05.986797] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:01.908 [2024-04-17 12:53:05.986987] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:01.908 passed 00:09:01.908 Test: blob_relations3 ...passed 00:09:02.167 Test: blobstore_clean_power_failure ...passed 00:09:02.167 Test: blob_delete_snapshot_power_failure ...[2024-04-17 12:53:06.173925] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:09:02.167 [2024-04-17 12:53:06.188154] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1512:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:09:02.167 [2024-04-17 12:53:06.202484] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:09:02.167 [2024-04-17 12:53:06.202803] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:09:02.167 [2024-04-17 12:53:06.202867] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:02.167 [2024-04-17 12:53:06.217347] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:09:02.167 [2024-04-17 12:53:06.217650] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1399:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:09:02.167 [2024-04-17 12:53:06.217722] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:09:02.167 [2024-04-17 12:53:06.217834] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:02.167 [2024-04-17 12:53:06.232196] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1512:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:09:02.167 [2024-04-17 12:53:06.232445] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1399:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:09:02.167 [2024-04-17 12:53:06.232515] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:09:02.167 [2024-04-17 12:53:06.232618] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:02.167 [2024-04-17 12:53:06.246701] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7488:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:09:02.167 [2024-04-17 12:53:06.247032] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:02.167 [2024-04-17 12:53:06.261237] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7360:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:09:02.167 [2024-04-17 12:53:06.261517] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:02.167 [2024-04-17 12:53:06.276402] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7304:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:09:02.167 [2024-04-17 12:53:06.276744] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:02.167 passed 00:09:02.427 Test: blob_create_snapshot_power_failure ...[2024-04-17 12:53:06.320192] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:09:02.427 [2024-04-17 12:53:06.334371] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1512:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:09:02.427 [2024-04-17 12:53:06.362966] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:09:02.427 [2024-04-17 12:53:06.377761] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6352:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:09:02.427 passed 00:09:02.427 Test: blob_io_unit ...passed 00:09:02.427 Test: blob_io_unit_compatibility ...passed 00:09:02.427 Test: blob_ext_md_pages ...passed 00:09:02.427 Test: blob_esnap_io_4096_4096 ...passed 00:09:02.427 Test: blob_esnap_io_512_512 ...passed 00:09:02.427 Test: blob_esnap_io_4096_512 ...passed 00:09:02.685 Test: 
blob_esnap_io_512_4096 ...passed 00:09:02.685 Suite: blob_bs_copy_extent 00:09:02.685 Test: blob_open ...passed 00:09:02.685 Test: blob_create ...[2024-04-17 12:53:06.657024] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:09:02.685 passed 00:09:02.685 Test: blob_create_loop ...passed 00:09:02.685 Test: blob_create_fail ...[2024-04-17 12:53:06.769058] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:02.685 passed 00:09:02.685 Test: blob_create_internal ...passed 00:09:02.944 Test: blob_create_zero_extent ...passed 00:09:02.944 Test: blob_snapshot ...passed 00:09:02.944 Test: blob_clone ...passed 00:09:02.944 Test: blob_inflate ...[2024-04-17 12:53:06.956698] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7010:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:09:02.944 passed 00:09:02.944 Test: blob_delete ...passed 00:09:02.944 Test: blob_resize_test ...[2024-04-17 12:53:07.032034] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:09:02.944 passed 00:09:02.944 Test: channel_ops ...passed 00:09:03.202 Test: blob_super ...passed 00:09:03.202 Test: blob_rw_verify_iov ...passed 00:09:03.202 Test: blob_unmap ...passed 00:09:03.202 Test: blob_iter ...passed 00:09:03.202 Test: blob_parse_md ...passed 00:09:03.202 Test: bs_load_pending_removal ...passed 00:09:03.203 Test: bs_unload ...[2024-04-17 12:53:07.341272] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5792:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:09:03.463 passed 00:09:03.463 Test: bs_usable_clusters ...passed 00:09:03.463 Test: blob_crc ...[2024-04-17 12:53:07.425366] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1611:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:09:03.463 [2024-04-17 12:53:07.425969] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1611:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:09:03.463 passed 00:09:03.463 Test: blob_flags ...passed 00:09:03.463 Test: bs_version ...passed 00:09:03.463 Test: blob_set_xattrs_test ...[2024-04-17 12:53:07.553209] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:03.463 [2024-04-17 12:53:07.553678] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:03.463 passed 00:09:03.724 Test: blob_thin_prov_alloc ...passed 00:09:03.724 Test: blob_insert_cluster_msg_test ...passed 00:09:03.724 Test: blob_thin_prov_rw ...passed 00:09:03.724 Test: blob_thin_prov_rle ...passed 00:09:03.724 Test: blob_thin_prov_rw_iov ...passed 00:09:03.982 Test: blob_snapshot_rw ...passed 00:09:03.982 Test: blob_snapshot_rw_iov ...passed 00:09:04.241 Test: blob_inflate_rw ...passed 00:09:04.241 Test: blob_snapshot_freeze_io ...passed 00:09:04.499 Test: blob_operation_split_rw ...passed 00:09:04.499 Test: blob_operation_split_rw_iov ...passed 00:09:04.499 Test: blob_simultaneous_operations ...[2024-04-17 12:53:08.603641] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:04.499 [2024-04-17 
12:53:08.604005] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:04.499 [2024-04-17 12:53:08.604573] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:04.499 [2024-04-17 12:53:08.604716] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:04.499 [2024-04-17 12:53:08.607234] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:04.499 [2024-04-17 12:53:08.607377] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:04.499 [2024-04-17 12:53:08.607513] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:04.499 [2024-04-17 12:53:08.607727] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:04.499 passed 00:09:04.757 Test: blob_persist_test ...passed 00:09:04.757 Test: blob_decouple_snapshot ...passed 00:09:04.757 Test: blob_seek_io_unit ...passed 00:09:04.757 Test: blob_nested_freezes ...passed 00:09:04.757 Suite: blob_blob_copy_extent 00:09:04.757 Test: blob_write ...passed 00:09:04.757 Test: blob_read ...passed 00:09:05.014 Test: blob_rw_verify ...passed 00:09:05.014 Test: blob_rw_verify_iov_nomem ...passed 00:09:05.014 Test: blob_rw_iov_read_only ...passed 00:09:05.014 Test: blob_xattr ...passed 00:09:05.014 Test: blob_dirty_shutdown ...passed 00:09:05.014 Test: blob_is_degraded ...passed 00:09:05.014 Suite: blob_esnap_bs_copy_extent 00:09:05.014 Test: blob_esnap_create ...passed 00:09:05.272 Test: blob_esnap_thread_add_remove ...passed 00:09:05.272 Test: blob_esnap_clone_snapshot ...passed 00:09:05.272 Test: blob_esnap_clone_inflate ...passed 00:09:05.272 Test: blob_esnap_clone_decouple ...passed 00:09:05.272 Test: blob_esnap_clone_reload ...passed 00:09:05.272 Test: blob_esnap_hotplug ...passed 00:09:05.272 00:09:05.272 Run Summary: Type Total Ran Passed Failed Inactive 00:09:05.272 suites 16 16 n/a 0 0 00:09:05.272 tests 352 352 352 0 0 00:09:05.272 asserts 93211 93211 93211 0 n/a 00:09:05.272 00:09:05.272 Elapsed time = 15.116 seconds 00:09:05.529 12:53:09 -- unit/unittest.sh@41 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:09:05.529 00:09:05.529 00:09:05.529 CUnit - A unit testing framework for C - Version 2.1-3 00:09:05.529 http://cunit.sourceforge.net/ 00:09:05.529 00:09:05.529 00:09:05.530 Suite: blob_bdev 00:09:05.530 Test: create_bs_dev ...passed 00:09:05.530 Test: create_bs_dev_ro ...[2024-04-17 12:53:09.455382] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 507:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:09:05.530 passed 00:09:05.530 Test: create_bs_dev_rw ...passed 00:09:05.530 Test: claim_bs_dev ...[2024-04-17 12:53:09.456334] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:09:05.530 passed 00:09:05.530 Test: claim_bs_dev_ro ...passed 00:09:05.530 Test: deferred_destroy_refs ...passed 00:09:05.530 Test: deferred_destroy_channels ...passed 00:09:05.530 Test: deferred_destroy_threads ...passed 00:09:05.530 00:09:05.530 Run Summary: Type Total Ran Passed Failed Inactive 00:09:05.530 suites 1 1 n/a 0 0 00:09:05.530 tests 8 8 8 0 0 00:09:05.530 
asserts 119 119 119 0 n/a 00:09:05.530 00:09:05.530 Elapsed time = 0.001 seconds 00:09:05.530 12:53:09 -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:09:05.530 00:09:05.530 00:09:05.530 CUnit - A unit testing framework for C - Version 2.1-3 00:09:05.530 http://cunit.sourceforge.net/ 00:09:05.530 00:09:05.530 00:09:05.530 Suite: tree 00:09:05.530 Test: blobfs_tree_op_test ...passed 00:09:05.530 00:09:05.530 Run Summary: Type Total Ran Passed Failed Inactive 00:09:05.530 suites 1 1 n/a 0 0 00:09:05.530 tests 1 1 1 0 0 00:09:05.530 asserts 27 27 27 0 n/a 00:09:05.530 00:09:05.530 Elapsed time = 0.000 seconds 00:09:05.530 12:53:09 -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:09:05.530 00:09:05.530 00:09:05.530 CUnit - A unit testing framework for C - Version 2.1-3 00:09:05.530 http://cunit.sourceforge.net/ 00:09:05.530 00:09:05.530 00:09:05.530 Suite: blobfs_async_ut 00:09:05.530 Test: fs_init ...passed 00:09:05.530 Test: fs_open ...passed 00:09:05.530 Test: fs_create ...passed 00:09:05.530 Test: fs_truncate ...passed 00:09:05.530 Test: fs_rename ...[2024-04-17 12:53:09.648250] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:09:05.530 passed 00:09:05.530 Test: fs_rw_async ...passed 00:09:05.787 Test: fs_writev_readv_async ...passed 00:09:05.787 Test: tree_find_buffer_ut ...passed 00:09:05.787 Test: channel_ops ...passed 00:09:05.787 Test: channel_ops_sync ...passed 00:09:05.787 00:09:05.787 Run Summary: Type Total Ran Passed Failed Inactive 00:09:05.787 suites 1 1 n/a 0 0 00:09:05.787 tests 10 10 10 0 0 00:09:05.787 asserts 292 292 292 0 n/a 00:09:05.787 00:09:05.787 Elapsed time = 0.178 seconds 00:09:05.787 12:53:09 -- unit/unittest.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:09:05.787 00:09:05.787 00:09:05.787 CUnit - A unit testing framework for C - Version 2.1-3 00:09:05.787 http://cunit.sourceforge.net/ 00:09:05.787 00:09:05.787 00:09:05.787 Suite: blobfs_sync_ut 00:09:05.787 Test: cache_read_after_write ...[2024-04-17 12:53:09.835288] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:09:05.787 passed 00:09:05.787 Test: file_length ...passed 00:09:05.787 Test: append_write_to_extend_blob ...passed 00:09:05.787 Test: partial_buffer ...passed 00:09:05.787 Test: cache_write_null_buffer ...passed 00:09:05.787 Test: fs_create_sync ...passed 00:09:06.046 Test: fs_rename_sync ...passed 00:09:06.046 Test: cache_append_no_cache ...passed 00:09:06.046 Test: fs_delete_file_without_close ...passed 00:09:06.046 00:09:06.046 Run Summary: Type Total Ran Passed Failed Inactive 00:09:06.046 suites 1 1 n/a 0 0 00:09:06.046 tests 9 9 9 0 0 00:09:06.046 asserts 345 345 345 0 n/a 00:09:06.046 00:09:06.046 Elapsed time = 0.376 seconds 00:09:06.046 12:53:10 -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:09:06.046 00:09:06.046 00:09:06.046 CUnit - A unit testing framework for C - Version 2.1-3 00:09:06.046 http://cunit.sourceforge.net/ 00:09:06.046 00:09:06.046 00:09:06.046 Suite: blobfs_bdev_ut 00:09:06.046 Test: spdk_blobfs_bdev_detect_test ...[2024-04-17 12:53:10.037829] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 
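
The blobfs deletions failing above ("Cannot find the file=file1 to deleted" — that wording is verbatim from lib/blobfs) report the miss through the async completion rather than a return code. A sketch of the call per include/spdk/blobfs.h, with a hypothetical callback; "file1" is simply the name the test used, and treating the result as -ENOENT is an assumption:

#include "spdk/blobfs.h"

static void
delete_done(void *ctx, int fserrno)
{
    /* negative when no file with the given name exists in the fs */
}

static void
delete_by_name(struct spdk_filesystem *fs)
{
    spdk_fs_delete_file_async(fs, "file1", delete_done, NULL);
}
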
00:09:06.046 passed 00:09:06.046 Test: spdk_blobfs_bdev_create_test ...[2024-04-17 12:53:10.038420] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:09:06.046 passed 00:09:06.046 Test: spdk_blobfs_bdev_mount_test ...passed 00:09:06.046 00:09:06.046 Run Summary: Type Total Ran Passed Failed Inactive 00:09:06.046 suites 1 1 n/a 0 0 00:09:06.046 tests 3 3 3 0 0 00:09:06.046 asserts 9 9 9 0 n/a 00:09:06.046 00:09:06.046 Elapsed time = 0.001 seconds 00:09:06.046 ************************************ 00:09:06.046 END TEST unittest_blob_blobfs 00:09:06.047 ************************************ 00:09:06.047 00:09:06.047 real 0m15.981s 00:09:06.047 user 0m15.272s 00:09:06.047 sys 0m0.763s 00:09:06.047 12:53:10 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:09:06.047 12:53:10 -- common/autotest_common.sh@10 -- # set +x 00:09:06.047 12:53:10 -- unit/unittest.sh@232 -- # run_test unittest_event unittest_event 00:09:06.047 12:53:10 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:09:06.047 12:53:10 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:06.047 12:53:10 -- common/autotest_common.sh@10 -- # set +x 00:09:06.047 ************************************ 00:09:06.047 START TEST unittest_event 00:09:06.047 ************************************ 00:09:06.047 12:53:10 -- common/autotest_common.sh@1099 -- # unittest_event 00:09:06.047 12:53:10 -- unit/unittest.sh@50 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:09:06.047 00:09:06.047 00:09:06.047 CUnit - A unit testing framework for C - Version 2.1-3 00:09:06.047 http://cunit.sourceforge.net/ 00:09:06.047 00:09:06.047 00:09:06.047 Suite: app_suite 00:09:06.047 Test: test_spdk_app_parse_args ...app_ut [options] 00:09:06.047 options:app_ut: invalid option -- 'z' 00:09:06.047 00:09:06.047 -c, --config JSON config file 00:09:06.047 --json JSON config file 00:09:06.047 --json-ignore-init-errors 00:09:06.047 don't exit on invalid config entry 00:09:06.047 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:09:06.047 -g, --single-file-segments 00:09:06.047 force creating just one hugetlbfs file 00:09:06.047 -h, --help show this usage 00:09:06.047 -i, --shm-id shared memory ID (optional) 00:09:06.047 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:09:06.047 --lcores lcore to CPU mapping list. The list is in the format: 00:09:06.047 [<,lcores[@CPUs]>...] 00:09:06.047 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:09:06.047 Within the group, '-' is used for range separator, 00:09:06.047 ',' is used for single number separator. 00:09:06.047 '( )' can be omitted for single element group, 00:09:06.047 '@' can be omitted if cpus and lcores have the same value 00:09:06.047 -n, --mem-channels channel number of memory channels used for DPDK 00:09:06.047 -p, --main-core main (primary) core for DPDK 00:09:06.047 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:09:06.047 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:09:06.047 --disable-cpumask-locks Disable CPU core lock files. 
00:09:06.047 --silence-noticelog disable notice level logging to stderr 00:09:06.047 --msg-mempool-size global message memory pool size in count (default: 262143) 00:09:06.047 -u, --no-pci disable PCI access 00:09:06.047 --wait-for-rpc wait for RPCs to initialize subsystems 00:09:06.047 --max-delay maximum reactor delay (in microseconds) 00:09:06.047 -B, --pci-blocked pci addr to block (can be used more than once) 00:09:06.047 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:09:06.047 -R, --huge-unlink unlink huge files after initialization 00:09:06.047 -v, --version print SPDK version 00:09:06.047 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:09:06.047 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:09:06.047 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:09:06.047 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:09:06.047 Tracepoints vary in size and can use more than one trace entry. 00:09:06.047 --rpcs-allowed comma-separated list of permitted RPCS 00:09:06.047 --env-context Opaque context for use of the env implementation 00:09:06.047 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:09:06.047 --no-huge run without using hugepages 00:09:06.047 -L, --logflag enable log flag (all, json_util, rpc, thread, trace) 00:09:06.047 -e, --tpoint-group [:] 00:09:06.047 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:09:06.047 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:09:06.047 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:09:06.047 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:09:06.047 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:09:06.047 app_ut: unrecognized option '--test-long-opt' 
00:09:06.048 passed 00:09:06.048 00:09:06.048 Run Summary: Type Total Ran Passed Failed Inactive 00:09:06.048 suites 1 1 n/a 0 0 00:09:06.048 tests 1 1 1 0 0 00:09:06.048 asserts 8 8 8 0 n/a 00:09:06.048 00:09:06.048 Elapsed time = 0.003 seconds 00:09:06.048 [2024-04-17 12:53:10.152906] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1077:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 
00:09:06.048 [2024-04-17 12:53:10.153350] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1258:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:09:06.048 [2024-04-17 12:53:10.153623] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1163:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:09:06.048 12:53:10 -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:09:06.048 00:09:06.048 00:09:06.048 CUnit - A unit testing framework for C - Version 2.1-3 00:09:06.048 http://cunit.sourceforge.net/ 00:09:06.048 00:09:06.048 00:09:06.048 Suite: app_suite 00:09:06.048 Test: test_create_reactor ...passed 00:09:06.048 Test: test_init_reactors ...passed 00:09:06.048 Test: test_event_call ...passed 00:09:06.048 Test: test_schedule_thread ...passed 00:09:06.048 Test: test_reschedule_thread ...passed 00:09:06.306 Test: test_bind_thread ...passed 00:09:06.306 Test: test_for_each_reactor ...passed 00:09:06.306 Test: test_reactor_stats ...passed 00:09:06.306 Test: test_scheduler ...passed 00:09:06.306 Test: test_governor ...passed 00:09:06.306 00:09:06.306 Run Summary: Type Total Ran Passed Failed Inactive 00:09:06.306 suites 1 1 n/a 0 0 00:09:06.306 tests 10 10 10 0 0 00:09:06.306 asserts 344 344 344 0 n/a 00:09:06.306 00:09:06.306 Elapsed time = 0.016 seconds 00:09:06.306 00:09:06.306 real 0m0.097s 00:09:06.306 user 0m0.064s 00:09:06.306 sys 0m0.024s 00:09:06.306 12:53:10 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:09:06.306 12:53:10 -- common/autotest_common.sh@10 -- # set +x 00:09:06.306 ************************************ 00:09:06.306 END TEST unittest_event 00:09:06.306 ************************************ 00:09:06.307 12:53:10 -- unit/unittest.sh@233 -- # uname -s 00:09:06.307 12:53:10 -- unit/unittest.sh@233 -- # '[' Linux = Linux ']' 00:09:06.307 12:53:10 -- unit/unittest.sh@234 -- # run_test unittest_ftl unittest_ftl 00:09:06.307 12:53:10 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:09:06.307 12:53:10 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:06.307 12:53:10 -- common/autotest_common.sh@10 -- # set +x 00:09:06.307 ************************************ 00:09:06.307 START TEST unittest_ftl 00:09:06.307 ************************************ 00:09:06.307 12:53:10 -- common/autotest_common.sh@1099 -- # unittest_ftl 00:09:06.307 12:53:10 -- unit/unittest.sh@55 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:09:06.307 00:09:06.307 00:09:06.307 CUnit - A unit testing framework for C - Version 2.1-3 00:09:06.307 http://cunit.sourceforge.net/ 00:09:06.307 00:09:06.307 00:09:06.307 Suite: ftl_band_suite 00:09:06.307 Test: test_band_block_offset_from_addr_base ...passed 00:09:06.307 Test: test_band_block_offset_from_addr_offset ...passed 00:09:06.307 Test: test_band_addr_from_block_offset ...passed 00:09:06.565 Test: test_band_set_addr ...passed 00:09:06.565 Test: test_invalidate_addr ...passed 00:09:06.565 Test: test_next_xfer_addr ...passed 00:09:06.565 00:09:06.565 Run Summary: Type Total Ran Passed Failed Inactive 00:09:06.565 suites 1 1 n/a 0 0 00:09:06.565 tests 6 6 6 0 0 00:09:06.565 asserts 30356 30356 30356 0 n/a 00:09:06.565 00:09:06.565 Elapsed time = 0.180 seconds 00:09:06.565 12:53:10 -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:09:06.565 00:09:06.565 00:09:06.565 CUnit - A unit testing framework for C - Version 2.1-3 00:09:06.565 http://cunit.sourceforge.net/ 00:09:06.565 
00:09:06.565 00:09:06.565 Suite: ftl_bitmap 00:09:06.565 Test: test_ftl_bitmap_create ...[2024-04-17 12:53:10.576796] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:09:06.565 [2024-04-17 12:53:10.577363] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:09:06.565 passed 00:09:06.565 Test: test_ftl_bitmap_get ...passed 00:09:06.565 Test: test_ftl_bitmap_set ...passed 00:09:06.565 Test: test_ftl_bitmap_clear ...passed 00:09:06.565 Test: test_ftl_bitmap_find_first_set ...passed 00:09:06.565 Test: test_ftl_bitmap_find_first_clear ...passed 00:09:06.565 Test: test_ftl_bitmap_count_set ...passed 00:09:06.565 00:09:06.565 Run Summary: Type Total Ran Passed Failed Inactive 00:09:06.565 suites 1 1 n/a 0 0 00:09:06.565 tests 7 7 7 0 0 00:09:06.565 asserts 137 137 137 0 n/a 00:09:06.565 00:09:06.565 Elapsed time = 0.001 seconds 00:09:06.565 12:53:10 -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:09:06.565 00:09:06.565 00:09:06.565 CUnit - A unit testing framework for C - Version 2.1-3 00:09:06.565 http://cunit.sourceforge.net/ 00:09:06.565 00:09:06.565 00:09:06.565 Suite: ftl_io_suite 00:09:06.565 Test: test_completion ...passed 00:09:06.565 Test: test_multiple_ios ...passed 00:09:06.565 00:09:06.565 Run Summary: Type Total Ran Passed Failed Inactive 00:09:06.565 suites 1 1 n/a 0 0 00:09:06.565 tests 2 2 2 0 0 00:09:06.565 asserts 47 47 47 0 n/a 00:09:06.565 00:09:06.565 Elapsed time = 0.002 seconds 00:09:06.565 12:53:10 -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:09:06.565 00:09:06.565 00:09:06.565 CUnit - A unit testing framework for C - Version 2.1-3 00:09:06.565 http://cunit.sourceforge.net/ 00:09:06.565 00:09:06.565 00:09:06.565 Suite: ftl_mngt 00:09:06.565 Test: test_next_step ...passed 00:09:06.565 Test: test_continue_step ...passed 00:09:06.565 Test: test_get_func_and_step_cntx_alloc ...passed 00:09:06.565 Test: test_fail_step ...passed 00:09:06.565 Test: test_mngt_call_and_call_rollback ...passed 00:09:06.565 Test: test_nested_process_failure ...passed 00:09:06.565 00:09:06.565 Run Summary: Type Total Ran Passed Failed Inactive 00:09:06.565 suites 1 1 n/a 0 0 00:09:06.565 tests 6 6 6 0 0 00:09:06.565 asserts 176 176 176 0 n/a 00:09:06.565 00:09:06.565 Elapsed time = 0.002 seconds 00:09:06.565 12:53:10 -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:09:06.565 00:09:06.565 00:09:06.565 CUnit - A unit testing framework for C - Version 2.1-3 00:09:06.565 http://cunit.sourceforge.net/ 00:09:06.565 00:09:06.565 00:09:06.565 Suite: ftl_mempool 00:09:06.565 Test: test_ftl_mempool_create ...passed 00:09:06.565 Test: test_ftl_mempool_get_put ...passed 00:09:06.565 00:09:06.565 Run Summary: Type Total Ran Passed Failed Inactive 00:09:06.565 suites 1 1 n/a 0 0 00:09:06.565 tests 2 2 2 0 0 00:09:06.565 asserts 36 36 36 0 n/a 00:09:06.565 00:09:06.565 Elapsed time = 0.000 seconds 00:09:06.566 12:53:10 -- unit/unittest.sh@60 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:09:06.566 00:09:06.566 00:09:06.566 CUnit - A unit testing framework for C - Version 2.1-3 00:09:06.566 http://cunit.sourceforge.net/ 00:09:06.566 00:09:06.566 00:09:06.566 Suite: ftl_addr64_suite 00:09:06.566 Test: test_addr_cached ...passed 00:09:06.566 00:09:06.566 
Run Summary: Type Total Ran Passed Failed Inactive 00:09:06.566 suites 1 1 n/a 0 0 00:09:06.566 tests 1 1 1 0 0 00:09:06.566 asserts 1536 1536 1536 0 n/a 00:09:06.566 00:09:06.566 Elapsed time = 0.000 seconds 00:09:06.825 12:53:10 -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:09:06.825 00:09:06.825 00:09:06.825 CUnit - A unit testing framework for C - Version 2.1-3 00:09:06.825 http://cunit.sourceforge.net/ 00:09:06.825 00:09:06.825 00:09:06.825 Suite: ftl_sb 00:09:06.825 Test: test_sb_crc_v2 ...passed 00:09:06.825 Test: test_sb_crc_v3 ...passed 00:09:06.825 Test: test_sb_v3_md_layout ...[2024-04-17 12:53:10.736069] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:09:06.825 [2024-04-17 12:53:10.736546] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:09:06.825 [2024-04-17 12:53:10.736711] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:09:06.825 [2024-04-17 12:53:10.736780] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:09:06.825 [2024-04-17 12:53:10.736883] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:09:06.825 [2024-04-17 12:53:10.737073] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:09:06.825 [2024-04-17 12:53:10.737209] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:09:06.825 [2024-04-17 12:53:10.737292] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:09:06.825 [2024-04-17 12:53:10.737418] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:09:06.825 [2024-04-17 12:53:10.737574] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:09:06.825 [2024-04-17 12:53:10.737633] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:09:06.825 passed 00:09:06.825 Test: test_sb_v5_md_layout ...passed 00:09:06.825 00:09:06.825 Run Summary: Type Total Ran Passed Failed Inactive 00:09:06.825 suites 1 1 n/a 0 0 00:09:06.825 tests 4 4 4 0 0 00:09:06.825 asserts 148 148 148 0 n/a 00:09:06.825 00:09:06.825 Elapsed time = 0.002 seconds 00:09:06.825 12:53:10 -- unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:09:06.825 00:09:06.825 00:09:06.825 CUnit - A unit testing framework for C - Version 2.1-3 00:09:06.825 http://cunit.sourceforge.net/ 00:09:06.825 00:09:06.825 00:09:06.825 Suite: ftl_layout_upgrade 00:09:06.825 Test: test_l2p_upgrade ...passed 00:09:06.825 00:09:06.825 Run Summary: Type Total Ran Passed Failed Inactive 00:09:06.825 suites 1 1 n/a 0 0 00:09:06.825 tests 1 1 1 0 0 00:09:06.825 asserts 140 
140 140 0 n/a 00:09:06.825 00:09:06.825 Elapsed time = 0.001 seconds 00:09:06.825 00:09:06.825 real 0m0.488s 00:09:06.825 user 0m0.217s 00:09:06.825 sys 0m0.264s 00:09:06.825 12:53:10 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:09:06.825 12:53:10 -- common/autotest_common.sh@10 -- # set +x 00:09:06.825 ************************************ 00:09:06.825 END TEST unittest_ftl 00:09:06.825 ************************************ 00:09:06.825 12:53:10 -- unit/unittest.sh@237 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:09:06.825 12:53:10 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:09:06.825 12:53:10 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:06.825 12:53:10 -- common/autotest_common.sh@10 -- # set +x 00:09:06.825 ************************************ 00:09:06.825 START TEST unittest_accel 00:09:06.825 ************************************ 00:09:06.825 12:53:10 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:09:06.825 00:09:06.825 00:09:06.825 CUnit - A unit testing framework for C - Version 2.1-3 00:09:06.825 http://cunit.sourceforge.net/ 00:09:06.825 00:09:06.825 00:09:06.825 Suite: accel_sequence 00:09:06.825 Test: test_sequence_fill_copy ...passed 00:09:06.825 Test: test_sequence_abort ...passed 00:09:06.825 Test: test_sequence_append_error ...passed 00:09:06.825 Test: test_sequence_completion_error ...[2024-04-17 12:53:10.899031] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1962:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f94e8ac27c0 00:09:06.825 [2024-04-17 12:53:10.899563] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1962:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7f94e8ac27c0 00:09:06.825 [2024-04-17 12:53:10.899747] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1872:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7f94e8ac27c0 00:09:06.825 [2024-04-17 12:53:10.900000] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1872:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7f94e8ac27c0 00:09:06.825 passed 00:09:06.825 Test: test_sequence_decompress ...passed 00:09:06.825 Test: test_sequence_reverse ...passed 00:09:06.825 Test: test_sequence_copy_elision ...passed 00:09:06.825 Test: test_sequence_accel_buffers ...passed 00:09:06.825 Test: test_sequence_memory_domain ...[2024-04-17 12:53:10.913065] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1764:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:09:06.825 [2024-04-17 12:53:10.913393] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1803:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:09:06.825 passed 00:09:06.825 Test: test_sequence_module_memory_domain ...passed 00:09:06.825 Test: test_sequence_crypto ...passed 00:09:06.825 Test: test_sequence_driver ...[2024-04-17 12:53:10.921078] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1911:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7f94e7e7a7c0 using driver: ut 00:09:06.825 [2024-04-17 12:53:10.921327] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1975:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f94e7e7a7c0 through driver: ut 00:09:06.825 passed 00:09:06.825 Test: test_sequence_same_iovs ...passed 00:09:06.825 Test: test_sequence_crc32 ...passed 00:09:06.825 Suite: accel 
00:09:06.825 Test: test_spdk_accel_task_complete ...passed 00:09:06.825 Test: test_get_task ...passed 00:09:06.825 Test: test_spdk_accel_submit_copy ...passed 00:09:06.825 Test: test_spdk_accel_submit_dualcast ...[2024-04-17 12:53:10.927714] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 433:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:09:06.825 [2024-04-17 12:53:10.927981] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 433:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:09:06.825 passed 00:09:06.825 Test: test_spdk_accel_submit_compare ...passed 00:09:06.825 Test: test_spdk_accel_submit_fill ...passed 00:09:06.825 Test: test_spdk_accel_submit_crc32c ...passed 00:09:06.826 Test: test_spdk_accel_submit_crc32cv ...passed 00:09:06.826 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:09:06.826 Test: test_spdk_accel_submit_xor ...passed 00:09:06.826 Test: test_spdk_accel_module_find_by_name ...passed 00:09:06.826 Test: test_spdk_accel_module_register ...passed 00:09:06.826 00:09:06.826 Run Summary: Type Total Ran Passed Failed Inactive 00:09:06.826 suites 2 2 n/a 0 0 00:09:06.826 tests 26 26 26 0 0 00:09:06.826 asserts 833 833 833 0 n/a 00:09:06.826 00:09:06.826 Elapsed time = 0.037 seconds 00:09:06.826 00:09:06.826 real 0m0.086s 00:09:06.826 user 0m0.049s 00:09:06.826 sys 0m0.031s 00:09:06.826 12:53:10 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:09:06.826 12:53:10 -- common/autotest_common.sh@10 -- # set +x 00:09:06.826 ************************************ 00:09:06.826 END TEST unittest_accel 00:09:06.826 ************************************ 00:09:07.085 12:53:10 -- unit/unittest.sh@238 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:09:07.085 12:53:10 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:09:07.085 12:53:10 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:07.085 12:53:10 -- common/autotest_common.sh@10 -- # set +x 00:09:07.085 ************************************ 00:09:07.085 START TEST unittest_ioat 00:09:07.085 ************************************ 00:09:07.085 12:53:11 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:09:07.085 00:09:07.085 00:09:07.085 CUnit - A unit testing framework for C - Version 2.1-3 00:09:07.085 http://cunit.sourceforge.net/ 00:09:07.085 00:09:07.085 00:09:07.085 Suite: ioat 00:09:07.085 Test: ioat_state_check ...passed 00:09:07.085 00:09:07.085 Run Summary: Type Total Ran Passed Failed Inactive 00:09:07.085 suites 1 1 n/a 0 0 00:09:07.085 tests 1 1 1 0 0 00:09:07.085 asserts 32 32 32 0 n/a 00:09:07.085 00:09:07.085 Elapsed time = 0.000 seconds 00:09:07.085 00:09:07.085 real 0m0.028s 00:09:07.085 user 0m0.013s 00:09:07.085 sys 0m0.015s 00:09:07.085 12:53:11 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:09:07.085 12:53:11 -- common/autotest_common.sh@10 -- # set +x 00:09:07.085 ************************************ 00:09:07.085 END TEST unittest_ioat 00:09:07.085 ************************************ 00:09:07.085 12:53:11 -- unit/unittest.sh@239 -- # grep -q '#define SPDK_CONFIG_IDXD 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:07.085 12:53:11 -- unit/unittest.sh@240 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:09:07.085 12:53:11 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:09:07.085 12:53:11 -- common/autotest_common.sh@1081 -- # 
xtrace_disable 00:09:07.085 12:53:11 -- common/autotest_common.sh@10 -- # set +x 00:09:07.085 ************************************ 00:09:07.086 START TEST unittest_idxd_user 00:09:07.086 ************************************ 00:09:07.086 12:53:11 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:09:07.086 00:09:07.086 00:09:07.086 CUnit - A unit testing framework for C - Version 2.1-3 00:09:07.086 http://cunit.sourceforge.net/ 00:09:07.086 00:09:07.086 00:09:07.086 Suite: idxd_user 00:09:07.086 Test: test_idxd_wait_cmd ...[2024-04-17 12:53:11.168679] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:09:07.086 [2024-04-17 12:53:11.169185] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:09:07.086 passed 00:09:07.086 Test: test_idxd_reset_dev ...[2024-04-17 12:53:11.169688] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:09:07.086 [2024-04-17 12:53:11.169879] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:09:07.086 passed 00:09:07.086 Test: test_idxd_group_config ...passed 00:09:07.086 Test: test_idxd_wq_config ...passed 00:09:07.086 00:09:07.086 Run Summary: Type Total Ran Passed Failed Inactive 00:09:07.086 suites 1 1 n/a 0 0 00:09:07.086 tests 4 4 4 0 0 00:09:07.086 asserts 20 20 20 0 n/a 00:09:07.086 00:09:07.086 Elapsed time = 0.001 seconds 00:09:07.086 00:09:07.086 real 0m0.033s 00:09:07.086 user 0m0.019s 00:09:07.086 sys 0m0.013s 00:09:07.086 12:53:11 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:09:07.086 12:53:11 -- common/autotest_common.sh@10 -- # set +x 00:09:07.086 ************************************ 00:09:07.086 END TEST unittest_idxd_user 00:09:07.086 ************************************ 00:09:07.086 12:53:11 -- unit/unittest.sh@242 -- # run_test unittest_iscsi unittest_iscsi 00:09:07.086 12:53:11 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:09:07.086 12:53:11 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:07.086 12:53:11 -- common/autotest_common.sh@10 -- # set +x 00:09:07.345 ************************************ 00:09:07.345 START TEST unittest_iscsi 00:09:07.345 ************************************ 00:09:07.345 12:53:11 -- common/autotest_common.sh@1099 -- # unittest_iscsi 00:09:07.345 12:53:11 -- unit/unittest.sh@66 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:09:07.345 00:09:07.345 00:09:07.345 CUnit - A unit testing framework for C - Version 2.1-3 00:09:07.345 http://cunit.sourceforge.net/ 00:09:07.345 00:09:07.345 00:09:07.345 Suite: conn_suite 00:09:07.345 Test: read_task_split_in_order_case ...passed 00:09:07.345 Test: read_task_split_reverse_order_case ...passed 00:09:07.345 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:09:07.345 Test: process_non_read_task_completion_test ...passed 00:09:07.345 Test: free_tasks_on_connection ...passed 00:09:07.345 Test: free_tasks_with_queued_datain ...passed 00:09:07.345 Test: abort_queued_datain_task_test ...passed 00:09:07.345 Test: abort_queued_datain_tasks_test ...passed 00:09:07.345 00:09:07.345 Run Summary: Type Total Ran Passed Failed Inactive 00:09:07.345 suites 1 1 n/a 0 0 00:09:07.345 tests 8 8 8 0 0 00:09:07.345 asserts 230 230 230 0 n/a 00:09:07.345 00:09:07.345 Elapsed time = 0.000 seconds 00:09:07.345 
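Every per-suite block above follows the same shape: a CUnit banner, one "Test: ... passed" line per test case, and a closing "Run Summary" whose suites/tests/asserts rows are printed by CUnit's Basic-mode runner (the banner "CUnit - A unit testing framework for C - Version 2.1-3" identifies it). As a rough sketch of how such a test binary is wired up, with placeholder suite and test names rather than the registration code in SPDK's actual *_ut.c files:

#include <CUnit/Basic.h>

static void test_example(void)
{
	/* Each assertion feeds the "asserts" row of the Run Summary. */
	CU_ASSERT(1 + 1 == 2);
}

int main(void)
{
	CU_pSuite suite;
	unsigned int failures;

	if (CU_initialize_registry() != CUE_SUCCESS) {
		return CU_get_error();
	}

	/* One CU_add_suite() per "Suite:" line in the log above. */
	suite = CU_add_suite("example_suite", NULL, NULL);
	if (suite == NULL) {
		CU_cleanup_registry();
		return CU_get_error();
	}

	/* One CU_add_test() per "Test:" line in the log above. */
	CU_add_test(suite, "test_example", test_example);

	/* Basic verbose mode prints the per-test lines and Run Summary. */
	CU_basic_set_mode(CU_BRM_VERBOSE);
	CU_basic_run_tests();
	failures = CU_get_number_of_failures();
	CU_cleanup_registry();

	return (int)failures;
}

The *ERROR* lines interleaved with "passed" results above are expected: many of these tests intentionally feed invalid input and assert that the library rejects it, so the error log output is part of a passing run.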
12:53:11 -- unit/unittest.sh@67 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:09:07.345 00:09:07.345 00:09:07.345 CUnit - A unit testing framework for C - Version 2.1-3 00:09:07.346 http://cunit.sourceforge.net/ 00:09:07.346 00:09:07.346 00:09:07.346 Suite: iscsi_suite 00:09:07.346 Test: param_negotiation_test ...passed 00:09:07.346 Test: list_negotiation_test ...passed 00:09:07.346 Test: parse_valid_test ...passed 00:09:07.346 Test: parse_invalid_test ...[2024-04-17 12:53:11.321732] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:09:07.346 [2024-04-17 12:53:11.322162] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:09:07.346 [2024-04-17 12:53:11.322344] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 207:iscsi_parse_param: *ERROR*: Empty key 00:09:07.346 [2024-04-17 12:53:11.322552] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:09:07.346 [2024-04-17 12:53:11.322800] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 256 00:09:07.346 [2024-04-17 12:53:11.322954] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 214:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:09:07.346 [2024-04-17 12:53:11.323210] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 228:iscsi_parse_param: *ERROR*: Duplicated Key B 00:09:07.346 passed 00:09:07.346 00:09:07.346 Run Summary: Type Total Ran Passed Failed Inactive 00:09:07.346 suites 1 1 n/a 0 0 00:09:07.346 tests 4 4 4 0 0 00:09:07.346 asserts 161 161 161 0 n/a 00:09:07.346 00:09:07.346 Elapsed time = 0.005 seconds 00:09:07.346 12:53:11 -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:09:07.346 00:09:07.346 00:09:07.346 CUnit - A unit testing framework for C - Version 2.1-3 00:09:07.346 http://cunit.sourceforge.net/ 00:09:07.346 00:09:07.346 00:09:07.346 Suite: iscsi_target_node_suite 00:09:07.346 Test: add_lun_test_cases ...[2024-04-17 12:53:11.351790] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1248:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:09:07.346 [2024-04-17 12:53:11.352294] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1254:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:09:07.346 [2024-04-17 12:53:11.352522] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:09:07.346 [2024-04-17 12:53:11.352715] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:09:07.346 [2024-04-17 12:53:11.352863] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1266:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:09:07.346 passed 00:09:07.346 Test: allow_any_allowed ...passed 00:09:07.346 Test: allow_ipv6_allowed ...passed 00:09:07.346 Test: allow_ipv6_denied ...passed 00:09:07.346 Test: allow_ipv6_invalid ...passed 00:09:07.346 Test: allow_ipv4_allowed ...passed 00:09:07.346 Test: allow_ipv4_denied ...passed 00:09:07.346 Test: allow_ipv4_invalid ...passed 00:09:07.346 Test: node_access_allowed ...passed 00:09:07.346 Test: node_access_denied_by_empty_netmask ...passed 00:09:07.346 Test: node_access_multi_initiator_groups_cases ...passed 00:09:07.346 Test: allow_iscsi_name_multi_maps_case ...passed 00:09:07.346 Test: chap_param_test_cases ...[2024-04-17 12:53:11.355627] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:09:07.346 [2024-04-17 12:53:11.355839] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:09:07.346 [2024-04-17 12:53:11.356015] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:09:07.346 [2024-04-17 12:53:11.356181] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:09:07.346 [2024-04-17 12:53:11.356352] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1026:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:09:07.346 passed 00:09:07.346 00:09:07.346 Run Summary: Type Total Ran Passed Failed Inactive 00:09:07.346 suites 1 1 n/a 0 0 00:09:07.346 tests 13 13 13 0 0 00:09:07.346 asserts 50 50 50 0 n/a 00:09:07.346 00:09:07.346 Elapsed time = 0.002 seconds 00:09:07.346 12:53:11 -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:09:07.346 00:09:07.346 00:09:07.346 CUnit - A unit testing framework for C - Version 2.1-3 00:09:07.346 http://cunit.sourceforge.net/ 00:09:07.346 00:09:07.346 00:09:07.346 Suite: iscsi_suite 00:09:07.346 Test: op_login_check_target_test ...[2024-04-17 12:53:11.395215] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied 00:09:07.346 passed 00:09:07.346 Test: op_login_session_normal_test ...[2024-04-17 12:53:11.395868] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:09:07.346 [2024-04-17 12:53:11.396008] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:09:07.346 [2024-04-17 12:53:11.396073] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:09:07.346 [2024-04-17 12:53:11.396197] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:09:07.346 [2024-04-17 12:53:11.396376] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:09:07.346 [2024-04-17 12:53:11.396573] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:09:07.346 [2024-04-17 12:53:11.396781] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:09:07.346 passed 00:09:07.346 Test: maxburstlength_test ...[2024-04-17 12:53:11.397317] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:09:07.346 [2024-04-17 12:53:11.397485] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4548:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:09:07.346 passed 00:09:07.346 Test: underflow_for_read_transfer_test ...passed 00:09:07.346 Test: underflow_for_zero_read_transfer_test ...passed 00:09:07.346 Test: underflow_for_request_sense_test ...passed 00:09:07.346 Test: underflow_for_check_condition_test ...passed 00:09:07.346 Test: 
add_transfer_task_test ...passed 00:09:07.346 Test: get_transfer_task_test ...passed 00:09:07.346 Test: del_transfer_task_test ...passed 00:09:07.346 Test: clear_all_transfer_tasks_test ...passed 00:09:07.346 Test: build_iovs_test ...passed 00:09:07.346 Test: build_iovs_with_md_test ...passed 00:09:07.346 Test: pdu_hdr_op_login_test ...[2024-04-17 12:53:11.400753] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error 00:09:07.346 [2024-04-17 12:53:11.400964] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1258:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:09:07.346 [2024-04-17 12:53:11.401157] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:09:07.346 passed 00:09:07.346 Test: pdu_hdr_op_text_test ...[2024-04-17 12:53:11.401562] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2240:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:09:07.346 [2024-04-17 12:53:11.401768] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2272:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:09:07.346 [2024-04-17 12:53:11.401911] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2285:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:09:07.346 passed 00:09:07.346 Test: pdu_hdr_op_logout_test ...[2024-04-17 12:53:11.402230] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2515:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 00:09:07.346 passed 00:09:07.346 Test: pdu_hdr_op_scsi_test ...[2024-04-17 12:53:11.402626] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:09:07.346 [2024-04-17 12:53:11.402701] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:09:07.346 [2024-04-17 12:53:11.402853] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3364:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:09:07.346 [2024-04-17 12:53:11.403052] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3397:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:09:07.346 [2024-04-17 12:53:11.403248] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3404:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:09:07.347 [2024-04-17 12:53:11.403523] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3428:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:09:07.347 passed 00:09:07.347 Test: pdu_hdr_op_task_mgmt_test ...[2024-04-17 12:53:11.403917] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3605:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:09:07.347 [2024-04-17 12:53:11.404079] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3694:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:09:07.347 passed 00:09:07.347 Test: pdu_hdr_op_nopout_test ...[2024-04-17 12:53:11.404505] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3713:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:09:07.347 [2024-04-17 12:53:11.404738] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:09:07.347 [2024-04-17 
12:53:11.404877] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:09:07.347 [2024-04-17 12:53:11.405029] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3743:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:09:07.347 passed 00:09:07.347 Test: pdu_hdr_op_data_test ...[2024-04-17 12:53:11.405380] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4186:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:09:07.347 [2024-04-17 12:53:11.405555] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4203:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:09:07.347 [2024-04-17 12:53:11.405729] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:09:07.347 [2024-04-17 12:53:11.405881] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4216:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:09:07.347 [2024-04-17 12:53:11.406067] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4222:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:09:07.347 [2024-04-17 12:53:11.406248] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4233:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:09:07.347 [2024-04-17 12:53:11.406401] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4243:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:09:07.347 passed 00:09:07.347 Test: empty_text_with_cbit_test ...passed 00:09:07.347 Test: pdu_payload_read_test ...[2024-04-17 12:53:11.408968] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4631:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:09:07.347 passed 00:09:07.347 Test: data_out_pdu_sequence_test ...passed 00:09:07.347 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:09:07.347 00:09:07.347 Run Summary: Type Total Ran Passed Failed Inactive 00:09:07.347 suites 1 1 n/a 0 0 00:09:07.347 tests 24 24 24 0 0 00:09:07.347 asserts 150253 150253 150253 0 n/a 00:09:07.347 00:09:07.347 Elapsed time = 0.018 seconds 00:09:07.347 12:53:11 -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:09:07.347 00:09:07.347 00:09:07.347 CUnit - A unit testing framework for C - Version 2.1-3 00:09:07.347 http://cunit.sourceforge.net/ 00:09:07.347 00:09:07.347 00:09:07.347 Suite: init_grp_suite 00:09:07.347 Test: create_initiator_group_success_case ...passed 00:09:07.347 Test: find_initiator_group_success_case ...passed 00:09:07.347 Test: register_initiator_group_twice_case ...passed 00:09:07.347 Test: add_initiator_name_success_case ...passed 00:09:07.347 Test: add_initiator_name_fail_case ...[2024-04-17 12:53:11.454669] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:09:07.347 passed 00:09:07.347 Test: delete_all_initiator_names_success_case ...passed 00:09:07.347 Test: add_netmask_success_case ...passed 00:09:07.347 Test: add_netmask_fail_case ...[2024-04-17 12:53:11.456052] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:09:07.347 passed 00:09:07.347 Test: delete_all_netmasks_success_case ...passed 00:09:07.347 Test: initiator_name_overwrite_all_to_any_case ...passed 00:09:07.347 Test: netmask_overwrite_all_to_any_case ...passed 00:09:07.347 Test: 
add_delete_initiator_names_case ...passed 00:09:07.347 Test: add_duplicated_initiator_names_case ...passed 00:09:07.347 Test: delete_nonexisting_initiator_names_case ...passed 00:09:07.347 Test: add_delete_netmasks_case ...passed 00:09:07.347 Test: add_duplicated_netmasks_case ...passed 00:09:07.347 Test: delete_nonexisting_netmasks_case ...passed 00:09:07.347 00:09:07.347 Run Summary: Type Total Ran Passed Failed Inactive 00:09:07.347 suites 1 1 n/a 0 0 00:09:07.347 tests 17 17 17 0 0 00:09:07.347 asserts 108 108 108 0 n/a 00:09:07.347 00:09:07.347 Elapsed time = 0.002 seconds 00:09:07.347 12:53:11 -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:09:07.606 00:09:07.606 00:09:07.606 CUnit - A unit testing framework for C - Version 2.1-3 00:09:07.606 http://cunit.sourceforge.net/ 00:09:07.606 00:09:07.606 00:09:07.606 Suite: portal_grp_suite 00:09:07.606 Test: portal_create_ipv4_normal_case ...passed 00:09:07.606 Test: portal_create_ipv6_normal_case ...passed 00:09:07.606 Test: portal_create_ipv4_wildcard_case ...passed 00:09:07.606 Test: portal_create_ipv6_wildcard_case ...passed 00:09:07.606 Test: portal_create_twice_case ...[2024-04-17 12:53:11.498601] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:09:07.606 passed 00:09:07.606 Test: portal_grp_register_unregister_case ...passed 00:09:07.606 Test: portal_grp_register_twice_case ...passed 00:09:07.606 Test: portal_grp_add_delete_case ...passed 00:09:07.606 Test: portal_grp_add_delete_twice_case ...passed 00:09:07.606 00:09:07.606 Run Summary: Type Total Ran Passed Failed Inactive 00:09:07.606 suites 1 1 n/a 0 0 00:09:07.606 tests 9 9 9 0 0 00:09:07.606 asserts 44 44 44 0 n/a 00:09:07.606 00:09:07.606 Elapsed time = 0.005 seconds 00:09:07.606 ************************************ 00:09:07.606 END TEST unittest_iscsi 00:09:07.606 ************************************ 00:09:07.606 00:09:07.606 real 0m0.259s 00:09:07.606 user 0m0.131s 00:09:07.606 sys 0m0.111s 00:09:07.606 12:53:11 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:09:07.606 12:53:11 -- common/autotest_common.sh@10 -- # set +x 00:09:07.606 12:53:11 -- unit/unittest.sh@243 -- # run_test unittest_json unittest_json 00:09:07.606 12:53:11 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:09:07.606 12:53:11 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:07.606 12:53:11 -- common/autotest_common.sh@10 -- # set +x 00:09:07.606 ************************************ 00:09:07.606 START TEST unittest_json 00:09:07.606 ************************************ 00:09:07.606 12:53:11 -- common/autotest_common.sh@1099 -- # unittest_json 00:09:07.606 12:53:11 -- unit/unittest.sh@75 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:09:07.606 00:09:07.606 00:09:07.606 CUnit - A unit testing framework for C - Version 2.1-3 00:09:07.606 http://cunit.sourceforge.net/ 00:09:07.606 00:09:07.606 00:09:07.606 Suite: json 00:09:07.606 Test: test_parse_literal ...passed 00:09:07.606 Test: test_parse_string_simple ...passed 00:09:07.606 Test: test_parse_string_control_chars ...passed 00:09:07.606 Test: test_parse_string_utf8 ...passed 00:09:07.606 Test: test_parse_string_escapes_twochar ...passed 00:09:07.606 Test: test_parse_string_escapes_unicode ...passed 00:09:07.606 Test: test_parse_number ...passed 00:09:07.606 Test: test_parse_array ...passed 00:09:07.606 Test: test_parse_object ...passed 00:09:07.606 
Test: test_parse_nesting ...passed 00:09:07.606 Test: test_parse_comment ...passed 00:09:07.606 00:09:07.606 Run Summary: Type Total Ran Passed Failed Inactive 00:09:07.606 suites 1 1 n/a 0 0 00:09:07.606 tests 11 11 11 0 0 00:09:07.606 asserts 1516 1516 1516 0 n/a 00:09:07.606 00:09:07.606 Elapsed time = 0.002 seconds 00:09:07.606 12:53:11 -- unit/unittest.sh@76 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:09:07.606 00:09:07.606 00:09:07.606 CUnit - A unit testing framework for C - Version 2.1-3 00:09:07.606 http://cunit.sourceforge.net/ 00:09:07.606 00:09:07.606 00:09:07.606 Suite: json 00:09:07.606 Test: test_strequal ...passed 00:09:07.606 Test: test_num_to_uint16 ...passed 00:09:07.606 Test: test_num_to_int32 ...passed 00:09:07.606 Test: test_num_to_uint64 ...passed 00:09:07.606 Test: test_decode_object ...passed 00:09:07.606 Test: test_decode_array ...passed 00:09:07.606 Test: test_decode_bool ...passed 00:09:07.606 Test: test_decode_uint16 ...passed 00:09:07.606 Test: test_decode_int32 ...passed 00:09:07.606 Test: test_decode_uint32 ...passed 00:09:07.606 Test: test_decode_uint64 ...passed 00:09:07.606 Test: test_decode_string ...passed 00:09:07.606 Test: test_decode_uuid ...passed 00:09:07.606 Test: test_find ...passed 00:09:07.606 Test: test_find_array ...passed 00:09:07.606 Test: test_iterating ...passed 00:09:07.606 Test: test_free_object ...passed 00:09:07.606 00:09:07.606 Run Summary: Type Total Ran Passed Failed Inactive 00:09:07.606 suites 1 1 n/a 0 0 00:09:07.606 tests 17 17 17 0 0 00:09:07.606 asserts 236 236 236 0 n/a 00:09:07.606 00:09:07.606 Elapsed time = 0.001 seconds 00:09:07.606 12:53:11 -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:09:07.606 00:09:07.606 00:09:07.606 CUnit - A unit testing framework for C - Version 2.1-3 00:09:07.606 http://cunit.sourceforge.net/ 00:09:07.606 00:09:07.606 00:09:07.606 Suite: json 00:09:07.606 Test: test_write_literal ...passed 00:09:07.606 Test: test_write_string_simple ...passed 00:09:07.606 Test: test_write_string_escapes ...passed 00:09:07.606 Test: test_write_string_utf16le ...passed 00:09:07.606 Test: test_write_number_int32 ...passed 00:09:07.606 Test: test_write_number_uint32 ...passed 00:09:07.606 Test: test_write_number_uint128 ...passed 00:09:07.606 Test: test_write_string_number_uint128 ...passed 00:09:07.606 Test: test_write_number_int64 ...passed 00:09:07.606 Test: test_write_number_uint64 ...passed 00:09:07.606 Test: test_write_number_double ...passed 00:09:07.606 Test: test_write_uuid ...passed 00:09:07.606 Test: test_write_array ...passed 00:09:07.606 Test: test_write_object ...passed 00:09:07.606 Test: test_write_nesting ...passed 00:09:07.606 Test: test_write_val ...passed 00:09:07.606 00:09:07.606 Run Summary: Type Total Ran Passed Failed Inactive 00:09:07.606 suites 1 1 n/a 0 0 00:09:07.606 tests 16 16 16 0 0 00:09:07.606 asserts 918 918 918 0 n/a 00:09:07.606 00:09:07.606 Elapsed time = 0.006 seconds 00:09:07.606 12:53:11 -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:09:07.606 00:09:07.607 00:09:07.607 CUnit - A unit testing framework for C - Version 2.1-3 00:09:07.607 http://cunit.sourceforge.net/ 00:09:07.607 00:09:07.607 00:09:07.607 Suite: jsonrpc 00:09:07.607 Test: test_parse_request ...passed 00:09:07.607 Test: test_parse_request_streaming ...passed 00:09:07.607 00:09:07.607 Run Summary: Type Total Ran Passed Failed Inactive 00:09:07.607 
suites 1 1 n/a 0 0 00:09:07.607 tests 2 2 2 0 0 00:09:07.607 asserts 289 289 289 0 n/a 00:09:07.607 00:09:07.607 Elapsed time = 0.003 seconds 00:09:07.607 00:09:07.607 real 0m0.143s 00:09:07.607 user 0m0.086s 00:09:07.607 sys 0m0.048s 00:09:07.607 12:53:11 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:09:07.607 12:53:11 -- common/autotest_common.sh@10 -- # set +x 00:09:07.607 ************************************ 00:09:07.607 END TEST unittest_json 00:09:07.607 ************************************ 00:09:07.866 12:53:11 -- unit/unittest.sh@244 -- # run_test unittest_rpc unittest_rpc 00:09:07.866 12:53:11 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:09:07.866 12:53:11 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:07.866 12:53:11 -- common/autotest_common.sh@10 -- # set +x 00:09:07.866 ************************************ 00:09:07.866 START TEST unittest_rpc 00:09:07.866 ************************************ 00:09:07.866 12:53:11 -- common/autotest_common.sh@1099 -- # unittest_rpc 00:09:07.866 12:53:11 -- unit/unittest.sh@82 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:09:07.866 00:09:07.866 00:09:07.866 CUnit - A unit testing framework for C - Version 2.1-3 00:09:07.866 http://cunit.sourceforge.net/ 00:09:07.866 00:09:07.866 00:09:07.866 Suite: rpc 00:09:07.866 Test: test_jsonrpc_handler ...passed 00:09:07.866 Test: test_spdk_rpc_is_method_allowed ...passed 00:09:07.866 Test: test_rpc_get_methods ...[2024-04-17 12:53:11.841634] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 446:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:09:07.866 passed 00:09:07.866 Test: test_rpc_spdk_get_version ...passed 00:09:07.866 Test: test_spdk_rpc_listen_close ...passed 00:09:07.866 Test: test_rpc_run_multiple_servers ...passed 00:09:07.866 00:09:07.866 Run Summary: Type Total Ran Passed Failed Inactive 00:09:07.866 suites 1 1 n/a 0 0 00:09:07.866 tests 6 6 6 0 0 00:09:07.866 asserts 23 23 23 0 n/a 00:09:07.866 00:09:07.866 Elapsed time = 0.001 seconds 00:09:07.866 00:09:07.866 real 0m0.035s 00:09:07.866 user 0m0.022s 00:09:07.866 sys 0m0.011s 00:09:07.866 12:53:11 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:09:07.866 12:53:11 -- common/autotest_common.sh@10 -- # set +x 00:09:07.866 ************************************ 00:09:07.866 END TEST unittest_rpc 00:09:07.866 ************************************ 00:09:07.866 12:53:11 -- unit/unittest.sh@245 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:09:07.866 12:53:11 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:09:07.866 12:53:11 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:07.866 12:53:11 -- common/autotest_common.sh@10 -- # set +x 00:09:07.866 ************************************ 00:09:07.866 START TEST unittest_notify 00:09:07.866 ************************************ 00:09:07.866 12:53:11 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:09:07.866 00:09:07.866 00:09:07.866 CUnit - A unit testing framework for C - Version 2.1-3 00:09:07.866 http://cunit.sourceforge.net/ 00:09:07.866 00:09:07.866 00:09:07.866 Suite: app_suite 00:09:07.866 Test: notify ...passed 00:09:07.866 00:09:07.866 Run Summary: Type Total Ran Passed Failed Inactive 00:09:07.866 suites 1 1 n/a 0 0 00:09:07.866 tests 1 1 1 0 0 00:09:07.866 asserts 13 13 13 0 n/a 00:09:07.866 00:09:07.866 Elapsed time = 0.000 seconds 00:09:07.866 ************************************ 
00:09:07.866 END TEST unittest_notify 00:09:07.866 ************************************ 00:09:07.866 00:09:07.866 real 0m0.031s 00:09:07.866 user 0m0.007s 00:09:07.866 sys 0m0.022s 00:09:07.866 12:53:11 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:09:07.866 12:53:11 -- common/autotest_common.sh@10 -- # set +x 00:09:08.125 12:53:12 -- unit/unittest.sh@246 -- # run_test unittest_nvme unittest_nvme 00:09:08.125 12:53:12 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:09:08.126 12:53:12 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:08.126 12:53:12 -- common/autotest_common.sh@10 -- # set +x 00:09:08.126 ************************************ 00:09:08.126 START TEST unittest_nvme 00:09:08.126 ************************************ 00:09:08.126 12:53:12 -- common/autotest_common.sh@1099 -- # unittest_nvme 00:09:08.126 12:53:12 -- unit/unittest.sh@86 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:09:08.126 00:09:08.126 00:09:08.126 CUnit - A unit testing framework for C - Version 2.1-3 00:09:08.126 http://cunit.sourceforge.net/ 00:09:08.126 00:09:08.126 00:09:08.126 Suite: nvme 00:09:08.126 Test: test_opc_data_transfer ...passed 00:09:08.126 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:09:08.126 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:09:08.126 Test: test_trid_parse_and_compare ...[2024-04-17 12:53:12.076832] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1171:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:09:08.126 [2024-04-17 12:53:12.077680] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1228:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:09:08.126 [2024-04-17 12:53:12.078054] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1183:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:09:08.126 [2024-04-17 12:53:12.078349] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1228:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:09:08.126 [2024-04-17 12:53:12.078630] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1194:parse_next_key: *ERROR*: Key without value 00:09:08.126 [2024-04-17 12:53:12.078952] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1228:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:09:08.126 passed 00:09:08.126 Test: test_trid_trtype_str ...passed 00:09:08.126 Test: test_trid_adrfam_str ...passed 00:09:08.126 Test: test_nvme_ctrlr_probe ...[2024-04-17 12:53:12.080131] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:09:08.126 passed 00:09:08.126 Test: test_spdk_nvme_probe ...[2024-04-17 12:53:12.080711] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:09:08.126 [2024-04-17 12:53:12.080962] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:09:08.126 [2024-04-17 12:53:12.081316] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 812:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:09:08.126 [2024-04-17 12:53:12.081597] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:09:08.126 passed 00:09:08.126 Test: test_spdk_nvme_connect ...[2024-04-17 12:53:12.082143] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 993:spdk_nvme_connect: *ERROR*: No transport ID specified 00:09:08.126 [2024-04-17 12:53:12.082716] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:09:08.126 [2024-04-17 12:53:12.083038] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1004:spdk_nvme_connect: *ERROR*: Create probe context failed 00:09:08.126 passed 00:09:08.126 Test: test_nvme_ctrlr_probe_internal ...[2024-04-17 12:53:12.083614] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:09:08.126 [2024-04-17 12:53:12.083910] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:09:08.126 passed 00:09:08.126 Test: test_nvme_init_controllers ...[2024-04-17 12:53:12.084358] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:09:08.126 passed 00:09:08.126 Test: test_nvme_driver_init ...[2024-04-17 12:53:12.084892] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:09:08.126 [2024-04-17 12:53:12.085119] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:09:08.126 [2024-04-17 12:53:12.198690] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:09:08.126 [2024-04-17 12:53:12.199588] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex 00:09:08.126 passed 00:09:08.126 Test: test_spdk_nvme_detach ...passed 00:09:08.126 Test: test_nvme_completion_poll_cb ...passed 00:09:08.126 Test: test_nvme_user_copy_cmd_complete ...passed 00:09:08.126 Test: test_nvme_allocate_request_null ...passed 00:09:08.126 Test: test_nvme_allocate_request ...passed 00:09:08.126 Test: test_nvme_free_request ...passed 00:09:08.126 Test: test_nvme_allocate_request_user_copy ...passed 00:09:08.126 Test: test_nvme_robust_mutex_init_shared ...passed 00:09:08.126 Test: test_nvme_request_check_timeout ...passed 00:09:08.126 Test: test_nvme_wait_for_completion ...passed 00:09:08.126 Test: test_spdk_nvme_parse_func ...passed 00:09:08.126 Test: test_spdk_nvme_detach_async ...passed 00:09:08.126 Test: test_nvme_parse_addr ...[2024-04-17 12:53:12.204524] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1581:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:09:08.126 passed 00:09:08.126 00:09:08.126 Run Summary: Type Total Ran Passed Failed Inactive 00:09:08.126 suites 1 1 n/a 0 0 00:09:08.126 tests 25 25 25 0 0 00:09:08.126 asserts 326 326 326 0 n/a 00:09:08.126 00:09:08.126 Elapsed time = 0.008 seconds 00:09:08.126 12:53:12 -- unit/unittest.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:09:08.126 00:09:08.126 00:09:08.126 CUnit - A unit testing framework for C - Version 2.1-3 00:09:08.126 http://cunit.sourceforge.net/ 00:09:08.126 00:09:08.126 00:09:08.126 Suite: nvme_ctrlr 00:09:08.126 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-04-17 12:53:12.236916] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:08.126 passed 00:09:08.126 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-04-17 12:53:12.239116] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:08.126 passed 00:09:08.126 Test: 
test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-04-17 12:53:12.240752] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:08.126 passed 00:09:08.126 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-04-17 12:53:12.242382] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:08.126 passed 00:09:08.126 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-04-17 12:53:12.244049] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:08.126 [2024-04-17 12:53:12.245357] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3946:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22
[2024-04-17 12:53:12.246731] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3946:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22
[2024-04-17 12:53:12.248011] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3946:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22
passed 00:09:08.126 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-04-17 12:53:12.250865] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:08.126 [2024-04-17 12:53:12.253311] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3946:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22
[2024-04-17 12:53:12.254770] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3946:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22
passed 00:09:08.126 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-04-17 12:53:12.257709] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:08.126 [2024-04-17 12:53:12.259133] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3946:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22
[2024-04-17 12:53:12.261749] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3946:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22
passed 00:09:08.126 Test: test_nvme_ctrlr_init_delay ...[2024-04-17 12:53:12.264823] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:08.126 passed 00:09:08.126 Test: test_alloc_io_qpair_rr_1 ...[2024-04-17 12:53:12.266700] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:08.126 [2024-04-17 12:53:12.267048] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5329:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:09:08.126 [2024-04-17 12:53:12.267460] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 398:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:09:08.126 [2024-04-17 12:53:12.267698] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 398:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:09:08.126 [2024-04-17
12:53:12.267941] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 398:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:09:08.385 passed 00:09:08.385 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:09:08.385 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:09:08.385 Test: test_alloc_io_qpair_wrr_1 ...[2024-04-17 12:53:12.268931] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:08.385 passed 00:09:08.385 Test: test_alloc_io_qpair_wrr_2 ...[2024-04-17 12:53:12.269577] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:08.385 [2024-04-17 12:53:12.269921] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5329:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:09:08.385 passed 00:09:08.385 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-04-17 12:53:12.270662] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4857:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:09:08.385 [2024-04-17 12:53:12.271019] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4894:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:09:08.385 [2024-04-17 12:53:12.271339] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4934:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 00:09:08.385 [2024-04-17 12:53:12.271562] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4894:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:09:08.385 passed 00:09:08.385 Test: test_nvme_ctrlr_fail ...[2024-04-17 12:53:12.272070] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [] in failed state. 
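The repeating "Suite:", "Test: ... passed", and "Run Summary" blocks throughout this run are CUnit's verbose basic-mode output, one block per test binary. For orientation, here is a minimal sketch of how such a suite is registered; the suite and test names are illustrative, not SPDK's actual registration code:

    #include <CUnit/Basic.h>

    /* Illustrative test body; the SPDK suites above stub library internals
     * and deliberately drive error paths like the ones logged. */
    static void test_example(void)
    {
        CU_ASSERT_EQUAL(1 + 1, 2);
    }

    int main(void)
    {
        if (CU_initialize_registry() != CUE_SUCCESS) {
            return CU_get_error();
        }

        CU_pSuite suite = CU_add_suite("example", NULL, NULL);
        if (suite == NULL || CU_add_test(suite, "test_example", test_example) == NULL) {
            CU_cleanup_registry();
            return CU_get_error();
        }

        /* Verbose basic mode prints the per-test "passed" lines and the
         * "Run Summary" table seen throughout this log. */
        CU_basic_set_mode(CU_BRM_VERBOSE);
        CU_basic_run_tests();

        unsigned int failures = CU_get_number_of_failures();
        CU_cleanup_registry();
        return failures ? 1 : 0;
    }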
00:09:08.385 passed 00:09:08.385 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:09:08.385 Test: test_nvme_ctrlr_set_supported_features ...passed 00:09:08.385 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:09:08.385 Test: test_nvme_ctrlr_test_active_ns ...[2024-04-17 12:53:12.273692] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:08.644 passed 00:09:08.644 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:09:08.644 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:09:08.644 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:09:08.644 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-04-17 12:53:12.616107] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:08.644 passed 00:09:08.644 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-04-17 12:53:12.623871] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:08.644 passed 00:09:08.644 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-04-17 12:53:12.625503] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:08.644 [2024-04-17 12:53:12.625674] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2882:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:09:08.644 passed 00:09:08.644 Test: test_alloc_io_qpair_fail ...[2024-04-17 12:53:12.627184] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:08.644 [2024-04-17 12:53:12.627397] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 510:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:09:08.644 passed 00:09:08.644 Test: test_nvme_ctrlr_add_remove_process ...passed 00:09:08.644 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:09:08.644 Test: test_nvme_ctrlr_set_state ...[2024-04-17 12:53:12.628143] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
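The parse_next_key errors near the top of the nvme suite ("Key without ':' or '=' separator", "Key length 32 greater than maximum allowed 31") come from feeding deliberately malformed transport ID strings to the parser. A hedged sketch of the public API being exercised, written as a plausible standalone caller rather than the unit test itself (requires SPDK headers and libraries; the PCIe address is made up):

    #include <stdio.h>
    #include <string.h>
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_nvme_transport_id trid;

        /* Well-formed: every key uses the "key:value" shape the parser expects. */
        memset(&trid, 0, sizeof(trid));
        if (spdk_nvme_transport_id_parse(&trid, "trtype:PCIe traddr:0000:5e:00.0") != 0) {
            fprintf(stderr, "unexpected parse failure\n");
            return 1;
        }

        /* Malformed: no ':' or '=' separator, the first error case logged above. */
        memset(&trid, 0, sizeof(trid));
        if (spdk_nvme_transport_id_parse(&trid, "trtypePCIe") != 0) {
            fprintf(stderr, "malformed transport ID rejected, as expected\n");
        }

        return 0;
    }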
00:09:08.644 passed 00:09:08.644 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-04-17 12:53:12.628472] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:08.644 passed 00:09:08.644 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-04-17 12:53:12.655496] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:08.644 passed 00:09:08.644 Test: test_nvme_ctrlr_ns_mgmt ...[2024-04-17 12:53:12.701527] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:08.644 passed 00:09:08.644 Test: test_nvme_ctrlr_reset ...[2024-04-17 12:53:12.703716] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:08.644 passed 00:09:08.644 Test: test_nvme_ctrlr_aer_callback ...[2024-04-17 12:53:12.704588] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:08.644 passed 00:09:08.644 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-04-17 12:53:12.706494] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:08.644 passed 00:09:08.644 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:09:08.644 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:09:08.644 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-04-17 12:53:12.709138] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:08.644 passed 00:09:08.644 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:09:08.644 Test: test_nvme_ctrlr_ana_resize ...[2024-04-17 12:53:12.711178] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:08.644 passed 00:09:08.644 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:09:08.644 Test: test_nvme_transport_ctrlr_ready ...[2024-04-17 12:53:12.713338] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:09:08.644 [2024-04-17 12:53:12.713626] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4079:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error) 00:09:08.644 passed 00:09:08.644 Test: test_nvme_ctrlr_disable ...[2024-04-17 12:53:12.714099] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:08.645 passed 00:09:08.645 00:09:08.645 Run Summary: Type Total Ran Passed Failed Inactive 00:09:08.645 suites 1 1 n/a 0 0 00:09:08.645 tests 43 43 43 0 0 00:09:08.645 asserts 10418 10418 10418 0 n/a 00:09:08.645 00:09:08.645 Elapsed time = 0.422 seconds 00:09:08.645 12:53:12 -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:09:08.645 00:09:08.645 
00:09:08.645 CUnit - A unit testing framework for C - Version 2.1-3 00:09:08.645 http://cunit.sourceforge.net/ 00:09:08.645 00:09:08.645 00:09:08.645 Suite: nvme_ctrlr_cmd 00:09:08.645 Test: test_get_log_pages ...passed 00:09:08.645 Test: test_set_feature_cmd ...passed 00:09:08.645 Test: test_set_feature_ns_cmd ...passed 00:09:08.645 Test: test_get_feature_cmd ...passed 00:09:08.645 Test: test_get_feature_ns_cmd ...passed 00:09:08.645 Test: test_abort_cmd ...passed 00:09:08.645 Test: test_set_host_id_cmds ...[2024-04-17 12:53:12.766178] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:09:08.645 passed 00:09:08.645 Test: test_io_cmd_raw_no_payload_build ...passed 00:09:08.645 Test: test_io_raw_cmd ...passed 00:09:08.645 Test: test_io_raw_cmd_with_md ...passed 00:09:08.645 Test: test_namespace_attach ...passed 00:09:08.645 Test: test_namespace_detach ...passed 00:09:08.645 Test: test_namespace_create ...passed 00:09:08.645 Test: test_namespace_delete ...passed 00:09:08.645 Test: test_doorbell_buffer_config ...passed 00:09:08.645 Test: test_format_nvme ...passed 00:09:08.645 Test: test_fw_commit ...passed 00:09:08.645 Test: test_fw_image_download ...passed 00:09:08.645 Test: test_sanitize ...passed 00:09:08.645 Test: test_directive ...passed 00:09:08.645 Test: test_nvme_request_add_abort ...passed 00:09:08.645 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:09:08.645 Test: test_nvme_ctrlr_cmd_identify ...passed 00:09:08.645 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:09:08.645 00:09:08.645 Run Summary: Type Total Ran Passed Failed Inactive 00:09:08.645 suites 1 1 n/a 0 0 00:09:08.645 tests 24 24 24 0 0 00:09:08.645 asserts 198 198 198 0 n/a 00:09:08.645 00:09:08.645 Elapsed time = 0.002 seconds 00:09:08.904 12:53:12 -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:09:08.904 00:09:08.904 00:09:08.904 CUnit - A unit testing framework for C - Version 2.1-3 00:09:08.904 http://cunit.sourceforge.net/ 00:09:08.904 00:09:08.904 00:09:08.904 Suite: nvme_ctrlr_cmd 00:09:08.904 Test: test_geometry_cmd ...passed 00:09:08.904 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:09:08.904 00:09:08.904 Run Summary: Type Total Ran Passed Failed Inactive 00:09:08.904 suites 1 1 n/a 0 0 00:09:08.904 tests 2 2 2 0 0 00:09:08.904 asserts 7 7 7 0 n/a 00:09:08.904 00:09:08.904 Elapsed time = 0.000 seconds 00:09:08.904 12:53:12 -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:09:08.904 00:09:08.904 00:09:08.904 CUnit - A unit testing framework for C - Version 2.1-3 00:09:08.904 http://cunit.sourceforge.net/ 00:09:08.904 00:09:08.904 00:09:08.904 Suite: nvme 00:09:08.904 Test: test_nvme_ns_construct ...passed 00:09:08.904 Test: test_nvme_ns_uuid ...passed 00:09:08.904 Test: test_nvme_ns_csi ...passed 00:09:08.904 Test: test_nvme_ns_data ...passed 00:09:08.904 Test: test_nvme_ns_set_identify_data ...passed 00:09:08.904 Test: test_spdk_nvme_ns_get_values ...passed 00:09:08.904 Test: test_spdk_nvme_ns_is_active ...passed 00:09:08.904 Test: spdk_nvme_ns_supports ...passed 00:09:08.904 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:09:08.904 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:09:08.904 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:09:08.904 Test: test_nvme_ns_find_id_desc ...passed 00:09:08.904 00:09:08.904 Run Summary: Type Total Ran 
Passed Failed Inactive 00:09:08.904 suites 1 1 n/a 0 0 00:09:08.904 tests 12 12 12 0 0 00:09:08.904 asserts 83 83 83 0 n/a 00:09:08.904 00:09:08.904 Elapsed time = 0.001 seconds 00:09:08.904 12:53:12 -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:09:08.904 00:09:08.904 00:09:08.904 CUnit - A unit testing framework for C - Version 2.1-3 00:09:08.904 http://cunit.sourceforge.net/ 00:09:08.904 00:09:08.904 00:09:08.904 Suite: nvme_ns_cmd 00:09:08.904 Test: split_test ...passed 00:09:08.904 Test: split_test2 ...passed 00:09:08.904 Test: split_test3 ...passed 00:09:08.904 Test: split_test4 ...passed 00:09:08.904 Test: test_nvme_ns_cmd_flush ...passed 00:09:08.904 Test: test_nvme_ns_cmd_dataset_management ...passed 00:09:08.904 Test: test_nvme_ns_cmd_copy ...passed 00:09:08.904 Test: test_io_flags ...[2024-04-17 12:53:12.873926] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:09:08.904 passed 00:09:08.904 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:09:08.904 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:09:08.904 Test: test_nvme_ns_cmd_reservation_register ...passed 00:09:08.904 Test: test_nvme_ns_cmd_reservation_release ...passed 00:09:08.904 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:09:08.904 Test: test_nvme_ns_cmd_reservation_report ...passed 00:09:08.904 Test: test_cmd_child_request ...passed 00:09:08.904 Test: test_nvme_ns_cmd_readv ...passed 00:09:08.904 Test: test_nvme_ns_cmd_read_with_md ...passed 00:09:08.904 Test: test_nvme_ns_cmd_writev ...[2024-04-17 12:53:12.876930] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 291:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:09:08.904 passed 00:09:08.904 Test: test_nvme_ns_cmd_write_with_md ...passed 00:09:08.904 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:09:08.904 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:09:08.904 Test: test_nvme_ns_cmd_comparev ...passed 00:09:08.904 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:09:08.904 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:09:08.904 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:09:08.904 Test: test_nvme_ns_cmd_setup_request ...passed 00:09:08.904 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:09:08.904 Test: test_spdk_nvme_ns_cmd_writev_ext ...[2024-04-17 12:53:12.880509] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:09:08.904 passed 00:09:08.904 Test: test_spdk_nvme_ns_cmd_readv_ext ...[2024-04-17 12:53:12.880903] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:09:08.904 passed 00:09:08.904 Test: test_nvme_ns_cmd_verify ...passed 00:09:08.904 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:09:08.904 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:09:08.904 00:09:08.904 Run Summary: Type Total Ran Passed Failed Inactive 00:09:08.904 suites 1 1 n/a 0 0 00:09:08.904 tests 32 32 32 0 0 00:09:08.904 asserts 550 550 550 0 n/a 00:09:08.904 00:09:08.904 Elapsed time = 0.005 seconds 00:09:08.904 12:53:12 -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:09:08.904 00:09:08.904 00:09:08.904 CUnit - A unit testing framework for C - Version 2.1-3 00:09:08.904 http://cunit.sourceforge.net/ 00:09:08.904 00:09:08.904 00:09:08.904 Suite: 
nvme_ns_cmd 00:09:08.904 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 00:09:08.904 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:09:08.904 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:09:08.904 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:09:08.904 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:09:08.904 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:09:08.904 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:09:08.904 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:09:08.904 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:09:08.904 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:09:08.904 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:09:08.904 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:09:08.904 00:09:08.904 Run Summary: Type Total Ran Passed Failed Inactive 00:09:08.904 suites 1 1 n/a 0 0 00:09:08.904 tests 12 12 12 0 0 00:09:08.904 asserts 123 123 123 0 n/a 00:09:08.904 00:09:08.904 Elapsed time = 0.002 seconds 00:09:08.904 12:53:12 -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:09:08.904 00:09:08.904 00:09:08.904 CUnit - A unit testing framework for C - Version 2.1-3 00:09:08.904 http://cunit.sourceforge.net/ 00:09:08.904 00:09:08.904 00:09:08.904 Suite: nvme_qpair 00:09:08.904 Test: test3 ...passed 00:09:08.904 Test: test_ctrlr_failed ...passed 00:09:08.904 Test: struct_packing ...passed 00:09:08.904 Test: test_nvme_qpair_process_completions ...[2024-04-17 12:53:12.958721] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:09:08.904 [2024-04-17 12:53:12.959167] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:09:08.904 [2024-04-17 12:53:12.959363] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:09:08.904 [2024-04-17 12:53:12.959553] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:09:08.904 passed 00:09:08.904 Test: test_nvme_completion_is_retry ...passed 00:09:08.904 Test: test_get_status_string ...passed 00:09:08.904 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:09:08.904 Test: test_nvme_qpair_submit_request ...passed 00:09:08.904 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:09:08.904 Test: test_nvme_qpair_manual_complete_request ...passed 00:09:08.904 Test: test_nvme_qpair_init_deinit ...[2024-04-17 12:53:12.961178] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:09:08.904 passed 00:09:08.904 Test: test_nvme_get_sgl_print_info ...passed 00:09:08.904 00:09:08.904 Run Summary: Type Total Ran Passed Failed Inactive 00:09:08.904 suites 1 1 n/a 0 0 00:09:08.904 tests 12 12 12 0 0 00:09:08.904 asserts 154 154 154 0 n/a 00:09:08.904 00:09:08.904 Elapsed time = 0.002 seconds 00:09:08.904 12:53:12 -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:09:08.904 00:09:08.904 00:09:08.904 CUnit - A unit testing framework for C - Version 2.1-3 00:09:08.904 http://cunit.sourceforge.net/ 00:09:08.904 
00:09:08.904 00:09:08.904 Suite: nvme_pcie 00:09:08.904 Test: test_prp_list_append ...[2024-04-17 12:53:12.994026] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:09:08.904 [2024-04-17 12:53:12.994385] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:09:08.904 [2024-04-17 12:53:12.994532] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1221:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:09:08.904 [2024-04-17 12:53:12.994769] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:09:08.904 [2024-04-17 12:53:12.994952] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:09:08.904 passed 00:09:08.904 Test: test_nvme_pcie_hotplug_monitor ...passed 00:09:08.904 Test: test_shadow_doorbell_update ...passed 00:09:08.904 Test: test_build_contig_hw_sgl_request ...passed 00:09:08.904 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:09:08.904 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:09:08.904 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:09:08.904 Test: test_nvme_pcie_qpair_build_contig_request ...[2024-04-17 12:53:12.996052] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:09:08.904 passed 00:09:08.904 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:09:08.904 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:09:08.904 Test: test_nvme_pcie_ctrlr_map_io_cmb ...[2024-04-17 12:53:12.996637] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
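The nvme_pcie test_prp_list_append failures above ("virt_addr 0x100001 not dword aligned", "PRP 2 not page aligned") exercise the standard NVMe PRP rules: the first PRP entry may start mid-page but must be dword aligned, and every subsequent entry must be page aligned. A small illustrative helper (not SPDK code) that counts the PRP entries a transfer needs under those rules:

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE 4096u

    /* The first entry may start mid-page; every later entry covers a full
     * page boundary -- exactly the alignment the errors above are checking. */
    static unsigned int prp_entry_count(uint64_t virt_addr, uint32_t len)
    {
        uint64_t first_page_bytes = PAGE_SIZE - (virt_addr & (PAGE_SIZE - 1));
        if (len <= first_page_bytes) {
            return 1;
        }
        uint32_t remaining = len - (uint32_t)first_page_bytes;
        return 1 + (remaining + PAGE_SIZE - 1) / PAGE_SIZE;
    }

    int main(void)
    {
        /* 0x100001, the address from the log, is rejected before counting:
         * it is not even dword aligned. */
        printf("8KiB at 0x100000: %u entries\n", prp_entry_count(0x100000, 8192)); /* 2 */
        printf("8KiB at 0x100100: %u entries\n", prp_entry_count(0x100100, 8192)); /* 3 */
        return 0;
    }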
00:09:08.904 passed 00:09:08.904 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...[2024-04-17 12:53:12.996958] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:09:08.904 passed 00:09:08.904 Test: test_nvme_pcie_ctrlr_config_pmr ...[2024-04-17 12:53:12.997165] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:09:08.904 passed 00:09:08.904 Test: test_nvme_pcie_ctrlr_map_io_pmr ...[2024-04-17 12:53:12.997539] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:09:08.904 passed 00:09:08.904 00:09:08.904 Run Summary: Type Total Ran Passed Failed Inactive 00:09:08.904 suites 1 1 n/a 0 0 00:09:08.904 tests 14 14 14 0 0 00:09:08.904 asserts 235 235 235 0 n/a 00:09:08.904 00:09:08.904 Elapsed time = 0.002 seconds 00:09:08.904 12:53:13 -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:09:08.904 00:09:08.904 00:09:08.905 CUnit - A unit testing framework for C - Version 2.1-3 00:09:08.905 http://cunit.sourceforge.net/ 00:09:08.905 00:09:08.905 00:09:08.905 Suite: nvme_ns_cmd 00:09:08.905 Test: nvme_poll_group_create_test ...passed 00:09:08.905 Test: nvme_poll_group_add_remove_test ...passed 00:09:08.905 Test: nvme_poll_group_process_completions ...passed 00:09:08.905 Test: nvme_poll_group_destroy_test ...passed 00:09:08.905 Test: nvme_poll_group_get_free_stats ...passed 00:09:08.905 00:09:08.905 Run Summary: Type Total Ran Passed Failed Inactive 00:09:08.905 suites 1 1 n/a 0 0 00:09:08.905 tests 5 5 5 0 0 00:09:08.905 asserts 75 75 75 0 n/a 00:09:08.905 00:09:08.905 Elapsed time = 0.001 seconds 00:09:08.905 12:53:13 -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:09:09.164 00:09:09.164 00:09:09.164 CUnit - A unit testing framework for C - Version 2.1-3 00:09:09.164 http://cunit.sourceforge.net/ 00:09:09.164 00:09:09.164 00:09:09.164 Suite: nvme_quirks 00:09:09.164 Test: test_nvme_quirks_striping ...passed 00:09:09.164 00:09:09.164 Run Summary: Type Total Ran Passed Failed Inactive 00:09:09.164 suites 1 1 n/a 0 0 00:09:09.164 tests 1 1 1 0 0 00:09:09.164 asserts 5 5 5 0 n/a 00:09:09.164 00:09:09.164 Elapsed time = 0.000 seconds 00:09:09.164 12:53:13 -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:09:09.164 00:09:09.164 00:09:09.164 CUnit - A unit testing framework for C - Version 2.1-3 00:09:09.164 http://cunit.sourceforge.net/ 00:09:09.164 00:09:09.164 00:09:09.164 Suite: nvme_tcp 00:09:09.164 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:09:09.164 Test: test_nvme_tcp_build_iovs ...passed 00:09:09.164 Test: test_nvme_tcp_build_sgl_request ...[2024-04-17 12:53:13.082424] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 824:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7ffdc0ccfbb0, and the iovcnt=16, remaining_size=28672 00:09:09.164 passed 00:09:09.164 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:09:09.164 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:09:09.164 Test: test_nvme_tcp_req_complete_safe ...passed 00:09:09.164 Test: test_nvme_tcp_req_get ...passed 00:09:09.164 Test: test_nvme_tcp_req_init ...passed 00:09:09.164 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:09:09.164 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:09:09.164 Test: 
test_nvme_tcp_qpair_set_recv_state ...[2024-04-17 12:53:13.085378] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdc0cd18e0 is same with the state(6) to be set 00:09:09.164 passed 00:09:09.164 Test: test_nvme_tcp_alloc_reqs ...passed 00:09:09.164 Test: test_nvme_tcp_qpair_send_h2c_term_req ...[2024-04-17 12:53:13.086227] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdc0cd0a70 is same with the state(5) to be set 00:09:09.164 passed 00:09:09.164 Test: test_nvme_tcp_pdu_ch_handle ...[2024-04-17 12:53:13.086628] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7ffdc0cd15c0 00:09:09.164 [2024-04-17 12:53:13.086825] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1223:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:09:09.164 [2024-04-17 12:53:13.087049] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdc0cd0f30 is same with the state(5) to be set 00:09:09.164 [2024-04-17 12:53:13.087232] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1174:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:09:09.164 [2024-04-17 12:53:13.087447] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdc0cd0f30 is same with the state(5) to be set 00:09:09.164 [2024-04-17 12:53:13.087618] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:09:09.164 [2024-04-17 12:53:13.087768] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdc0cd0f30 is same with the state(5) to be set 00:09:09.164 [2024-04-17 12:53:13.087993] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdc0cd0f30 is same with the state(5) to be set 00:09:09.164 [2024-04-17 12:53:13.088166] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdc0cd0f30 is same with the state(5) to be set 00:09:09.164 [2024-04-17 12:53:13.088363] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdc0cd0f30 is same with the state(5) to be set 00:09:09.164 [2024-04-17 12:53:13.088527] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdc0cd0f30 is same with the state(5) to be set 00:09:09.164 [2024-04-17 12:53:13.088713] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdc0cd0f30 is same with the state(5) to be set 00:09:09.164 passed 00:09:09.164 Test: test_nvme_tcp_qpair_connect_sock ...[2024-04-17 12:53:13.089223] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2321:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:09:09.164 [2024-04-17 12:53:13.089398] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:09:09.164 [2024-04-17 12:53:13.089776] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr 
nvme_parse_addr() failed 00:09:09.164 passed 00:09:09.164 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:09:09.164 Test: test_nvme_tcp_c2h_payload_handle ...[2024-04-17 12:53:13.090328] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1338:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffdc0cd1100): PDU Sequence Error 00:09:09.164 passed 00:09:09.164 Test: test_nvme_tcp_icresp_handle ...[2024-04-17 12:53:13.090720] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1564:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:09:09.164 [2024-04-17 12:53:13.090896] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1571:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:09:09.164 [2024-04-17 12:53:13.091074] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdc0cd0a80 is same with the state(5) to be set 00:09:09.164 [2024-04-17 12:53:13.091252] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1580:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:09:09.164 [2024-04-17 12:53:13.091337] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdc0cd0a80 is same with the state(5) to be set 00:09:09.164 [2024-04-17 12:53:13.091621] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdc0cd0a80 is same with the state(0) to be set 00:09:09.164 passed 00:09:09.164 Test: test_nvme_tcp_pdu_payload_handle ...[2024-04-17 12:53:13.092086] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1338:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffdc0cd15c0): PDU Sequence Error 00:09:09.164 passed 00:09:09.165 Test: test_nvme_tcp_capsule_resp_hdr_handle ...[2024-04-17 12:53:13.092505] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1641:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7ffdc0ccfd50 00:09:09.165 passed 00:09:09.165 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed 00:09:09.165 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-04-17 12:53:13.093327] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 353:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7ffdc0ccf3d0, errno=0, rc=0 00:09:09.165 [2024-04-17 12:53:13.093495] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdc0ccf3d0 is same with the state(5) to be set 00:09:09.165 [2024-04-17 12:53:13.093678] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdc0ccf3d0 is same with the state(5) to be set 00:09:09.165 [2024-04-17 12:53:13.093850] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffdc0ccf3d0 (0): Success 00:09:09.165 [2024-04-17 12:53:13.094045] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffdc0ccf3d0 (0): Success 00:09:09.165 passed 00:09:09.165 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-04-17 12:53:13.214349] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2504:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 
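Several suites in this run (nvme_tcp above, nvme_rdma later) reject queue sizes below 2. A plausible rationale, sketched here as an assumption rather than a statement about SPDK internals: NVMe queues are circular, distinguishing "empty" (head == tail) from "full" costs one slot, so an N-entry queue holds at most N-1 commands and a 1-entry queue could hold none:

    #include <stdbool.h>
    #include <stdio.h>

    /* Illustrative check mirroring the "Minimum queue size is 2." errors:
     * with head/tail full-empty disambiguation, usable depth is N - 1. */
    static bool queue_size_valid(unsigned int num_entries)
    {
        return num_entries >= 2;
    }

    int main(void)
    {
        for (unsigned int n = 0; n <= 2; n++) {
            printf("queue size %u -> %s (usable depth %u)\n",
                   n, queue_size_valid(n) ? "ok" : "rejected",
                   n > 0 ? n - 1 : 0);
        }
        return 0;
    }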
00:09:09.165 [2024-04-17 12:53:13.214738] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2504:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:09:09.165 passed 00:09:09.165 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:09:09.165 Test: test_nvme_tcp_poll_group_get_stats ...[2024-04-17 12:53:13.215509] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2952:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:09:09.165 [2024-04-17 12:53:13.215697] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2952:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:09:09.165 passed 00:09:09.165 Test: test_nvme_tcp_ctrlr_construct ...[2024-04-17 12:53:13.216299] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2504:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:09:09.165 [2024-04-17 12:53:13.216481] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:09:09.165 [2024-04-17 12:53:13.216775] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2321:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:09:09.165 [2024-04-17 12:53:13.216966] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:09:09.165 [2024-04-17 12:53:13.217208] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000000c40 with addr=192.168.1.78, port=23 00:09:09.165 [2024-04-17 12:53:13.217411] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:09:09.165 passed 00:09:09.165 Test: test_nvme_tcp_qpair_submit_request ...[2024-04-17 12:53:13.217900] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 824:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x613000000c80, and the iovcnt=1, remaining_size=1024 00:09:09.165 [2024-04-17 12:53:13.218095] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1017:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:09:09.165 passed 00:09:09.165 00:09:09.165 Run Summary: Type Total Ran Passed Failed Inactive 00:09:09.165 suites 1 1 n/a 0 0 00:09:09.165 tests 27 27 27 0 0 00:09:09.165 asserts 624 624 624 0 n/a 00:09:09.165 00:09:09.165 Elapsed time = 0.127 seconds 00:09:09.165 12:53:13 -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:09:09.165 00:09:09.165 00:09:09.165 CUnit - A unit testing framework for C - Version 2.1-3 00:09:09.165 http://cunit.sourceforge.net/ 00:09:09.165 00:09:09.165 00:09:09.165 Suite: nvme_transport 00:09:09.165 Test: test_nvme_get_transport ...passed 00:09:09.165 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:09:09.165 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:09:09.165 Test: test_nvme_transport_poll_group_add_remove ...passed 00:09:09.165 Test: test_ctrlr_get_memory_domains ...passed 00:09:09.165 00:09:09.165 Run Summary: Type Total Ran Passed Failed Inactive 00:09:09.165 suites 1 1 n/a 0 0 00:09:09.165 tests 5 5 5 0 0 00:09:09.165 asserts 28 28 28 0 n/a 00:09:09.165 00:09:09.165 Elapsed time = 0.000 seconds 00:09:09.165 12:53:13 -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:09:09.165 00:09:09.165 00:09:09.165 CUnit - A unit testing framework for 
C - Version 2.1-3 00:09:09.165 http://cunit.sourceforge.net/ 00:09:09.165 00:09:09.165 00:09:09.165 Suite: nvme_io_msg 00:09:09.165 Test: test_nvme_io_msg_send ...passed 00:09:09.165 Test: test_nvme_io_msg_process ...passed 00:09:09.165 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:09:09.165 00:09:09.165 Run Summary: Type Total Ran Passed Failed Inactive 00:09:09.165 suites 1 1 n/a 0 0 00:09:09.165 tests 3 3 3 0 0 00:09:09.165 asserts 56 56 56 0 n/a 00:09:09.165 00:09:09.165 Elapsed time = 0.000 seconds 00:09:09.165 12:53:13 -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:09:09.424 00:09:09.424 00:09:09.424 CUnit - A unit testing framework for C - Version 2.1-3 00:09:09.424 http://cunit.sourceforge.net/ 00:09:09.424 00:09:09.424 00:09:09.424 Suite: nvme_pcie_common 00:09:09.424 Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-04-17 12:53:13.314764] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:09:09.424 passed 00:09:09.424 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:09:09.424 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:09:09.424 Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-04-17 12:53:13.316947] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 503:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:09:09.424 [2024-04-17 12:53:13.317407] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 456:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:09:09.424 [2024-04-17 12:53:13.317786] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 550:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:09:09.424 passed 00:09:09.424 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...passed 00:09:09.424 Test: test_nvme_pcie_poll_group_get_stats ...[2024-04-17 12:53:13.319205] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1793:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:09:09.424 [2024-04-17 12:53:13.319464] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1793:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:09:09.424 passed 00:09:09.424 00:09:09.424 Run Summary: Type Total Ran Passed Failed Inactive 00:09:09.424 suites 1 1 n/a 0 0 00:09:09.424 tests 6 6 6 0 0 00:09:09.424 asserts 148 148 148 0 n/a 00:09:09.424 00:09:09.424 Elapsed time = 0.002 seconds 00:09:09.424 12:53:13 -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:09:09.424 00:09:09.424 00:09:09.424 CUnit - A unit testing framework for C - Version 2.1-3 00:09:09.424 http://cunit.sourceforge.net/ 00:09:09.424 00:09:09.424 00:09:09.424 Suite: nvme_fabric 00:09:09.424 Test: test_nvme_fabric_prop_set_cmd ...passed 00:09:09.424 Test: test_nvme_fabric_prop_get_cmd ...passed 00:09:09.424 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:09:09.424 Test: test_nvme_fabric_discover_probe ...passed 00:09:09.424 Test: test_nvme_fabric_qpair_connect ...[2024-04-17 12:53:13.358294] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:09:09.424 passed 00:09:09.424 00:09:09.424 Run Summary: Type Total Ran Passed Failed Inactive 00:09:09.424 suites 
1 1 n/a 0 0 00:09:09.424 tests 5 5 5 0 0 00:09:09.424 asserts 60 60 60 0 n/a 00:09:09.424 00:09:09.424 Elapsed time = 0.001 seconds 00:09:09.424 12:53:13 -- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:09:09.424 00:09:09.424 00:09:09.424 CUnit - A unit testing framework for C - Version 2.1-3 00:09:09.424 http://cunit.sourceforge.net/ 00:09:09.424 00:09:09.424 00:09:09.424 Suite: nvme_opal 00:09:09.424 Test: test_opal_nvme_security_recv_send_done ...passed 00:09:09.424 Test: test_opal_add_short_atom_header ...[2024-04-17 12:53:13.388324] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:09:09.424 passed 00:09:09.424 00:09:09.424 Run Summary: Type Total Ran Passed Failed Inactive 00:09:09.424 suites 1 1 n/a 0 0 00:09:09.424 tests 2 2 2 0 0 00:09:09.424 asserts 22 22 22 0 n/a 00:09:09.424 00:09:09.424 Elapsed time = 0.001 seconds 00:09:09.424 00:09:09.424 real 0m1.346s 00:09:09.424 user 0m0.705s 00:09:09.424 sys 0m0.424s 00:09:09.424 12:53:13 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:09:09.424 12:53:13 -- common/autotest_common.sh@10 -- # set +x 00:09:09.424 ************************************ 00:09:09.424 END TEST unittest_nvme 00:09:09.424 ************************************ 00:09:09.424 12:53:13 -- unit/unittest.sh@247 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:09:09.424 12:53:13 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:09:09.424 12:53:13 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:09.424 12:53:13 -- common/autotest_common.sh@10 -- # set +x 00:09:09.424 ************************************ 00:09:09.424 START TEST unittest_log 00:09:09.424 ************************************ 00:09:09.424 12:53:13 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:09:09.424 00:09:09.424 00:09:09.424 CUnit - A unit testing framework for C - Version 2.1-3 00:09:09.424 http://cunit.sourceforge.net/ 00:09:09.424 00:09:09.424 00:09:09.424 Suite: log 00:09:09.424 Test: log_test ...[2024-04-17 12:53:13.493893] log_ut.c: 56:log_test: *WARNING*: log warning unit test 00:09:09.424 [2024-04-17 12:53:13.494447] log_ut.c: 57:log_test: *DEBUG*: log test 00:09:09.424 log dump test: 00:09:09.424 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:09:09.424 spdk dump test: 00:09:09.424 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:09:09.424 spdk dump test: 00:09:09.424 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:09:09.424 00000010 65 20 63 68 61 72 73 e chars 00:09:09.424 passed 00:09:10.367 Test: deprecation ...passed 00:09:10.367 00:09:10.367 Run Summary: Type Total Ran Passed Failed Inactive 00:09:10.367 suites 1 1 n/a 0 0 00:09:10.367 tests 2 2 2 0 0 00:09:10.367 asserts 73 73 73 0 n/a 00:09:10.367 00:09:10.367 Elapsed time = 0.001 seconds 00:09:10.367 00:09:10.367 real 0m1.030s 00:09:10.367 user 0m0.013s 00:09:10.367 sys 0m0.015s 00:09:10.367 12:53:14 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:09:10.367 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:09:10.367 ************************************ 00:09:10.367 END TEST unittest_log 00:09:10.367 ************************************ 00:09:10.626 12:53:14 -- unit/unittest.sh@248 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:09:10.626 12:53:14 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 
']' 00:09:10.627 12:53:14 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:10.627 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:09:10.627 ************************************ 00:09:10.627 START TEST unittest_lvol 00:09:10.627 ************************************ 00:09:10.627 12:53:14 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:09:10.627 00:09:10.627 00:09:10.627 CUnit - A unit testing framework for C - Version 2.1-3 00:09:10.627 http://cunit.sourceforge.net/ 00:09:10.627 00:09:10.627 00:09:10.627 Suite: lvol 00:09:10.627 Test: lvs_init_unload_success ...[2024-04-17 12:53:14.605271] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:09:10.627 passed 00:09:10.627 Test: lvs_init_destroy_success ...[2024-04-17 12:53:14.606134] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:09:10.627 passed 00:09:10.627 Test: lvs_init_opts_success ...passed 00:09:10.627 Test: lvs_unload_lvs_is_null_fail ...[2024-04-17 12:53:14.606876] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:09:10.627 passed 00:09:10.627 Test: lvs_names ...[2024-04-17 12:53:14.607268] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:09:10.627 [2024-04-17 12:53:14.607430] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:09:10.627 [2024-04-17 12:53:14.607744] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:09:10.627 passed 00:09:10.627 Test: lvol_create_destroy_success ...passed 00:09:10.627 Test: lvol_create_fail ...[2024-04-17 12:53:14.608954] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:09:10.627 [2024-04-17 12:53:14.609187] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:09:10.627 passed 00:09:10.627 Test: lvol_destroy_fail ...[2024-04-17 12:53:14.609810] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:09:10.627 passed 00:09:10.627 Test: lvol_close ...[2024-04-17 12:53:14.610298] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:09:10.627 [2024-04-17 12:53:14.610465] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:09:10.627 passed 00:09:10.627 Test: lvol_resize ...passed 00:09:10.627 Test: lvol_set_read_only ...passed 00:09:10.627 Test: test_lvs_load ...[2024-04-17 12:53:14.612018] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:09:10.627 [2024-04-17 12:53:14.612170] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:09:10.627 passed 00:09:10.627 Test: lvols_load ...[2024-04-17 12:53:14.612685] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:09:10.627 [2024-04-17 12:53:14.612922] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:09:10.627 passed 00:09:10.627 Test: lvol_open ...passed 00:09:10.627 Test: lvol_snapshot ...passed 00:09:10.627 Test: lvol_snapshot_fail ...[2024-04-17 
12:53:14.614366] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:09:10.627 passed 00:09:10.627 Test: lvol_clone ...passed 00:09:10.627 Test: lvol_clone_fail ...[2024-04-17 12:53:14.615395] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:09:10.627 passed 00:09:10.627 Test: lvol_iter_clones ...passed 00:09:10.627 Test: lvol_refcnt ...[2024-04-17 12:53:14.616440] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 4abd18d4-ed8c-42b0-8f4b-9abad9b2023e because it is still open 00:09:10.627 passed 00:09:10.627 Test: lvol_names ...[2024-04-17 12:53:14.616936] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:09:10.627 [2024-04-17 12:53:14.617133] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:09:10.627 [2024-04-17 12:53:14.617494] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:09:10.627 passed 00:09:10.627 Test: lvol_create_thin_provisioned ...passed 00:09:10.627 Test: lvol_rename ...[2024-04-17 12:53:14.618482] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:09:10.627 [2024-04-17 12:53:14.618696] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:09:10.627 passed 00:09:10.627 Test: lvs_rename ...[2024-04-17 12:53:14.619240] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:09:10.627 passed 00:09:10.627 Test: lvol_inflate ...[2024-04-17 12:53:14.619723] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:09:10.627 passed 00:09:10.627 Test: lvol_decouple_parent ...[2024-04-17 12:53:14.620287] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:09:10.627 passed 00:09:10.627 Test: lvol_get_xattr ...passed 00:09:10.627 Test: lvol_esnap_reload ...passed 00:09:10.627 Test: lvol_esnap_create_bad_args ...[2024-04-17 12:53:14.621434] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:09:10.627 [2024-04-17 12:53:14.621572] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
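The lvs_verify_lvol_name errors in this suite cover two distinct failure modes: a name field with no NUL terminator, and a name that collides with an existing lvol. An illustrative standalone check (hypothetical helper and field size, not SPDK's implementation):

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    #define NAME_FIELD_SIZE 64  /* hypothetical fixed field size, not SPDK's constant */

    /* Reject names that are not NUL-terminated within the field or that
     * duplicate an existing name -- the two error cases exercised above. */
    static bool name_valid(const char *name, const char *existing[], size_t n)
    {
        if (memchr(name, '\0', NAME_FIELD_SIZE) == NULL) {
            return false;  /* "Name has no null terminator." */
        }
        for (size_t i = 0; i < n; i++) {
            if (strcmp(name, existing[i]) == 0) {
                return false;  /* "lvol with name ... already exists" */
            }
        }
        return true;
    }

    int main(void)
    {
        const char *existing[] = { "lvol", "clone" };
        char dup[NAME_FIELD_SIZE] = "lvol";
        char fresh[NAME_FIELD_SIZE] = "lvol2";
        char unterminated[NAME_FIELD_SIZE];
        memset(unterminated, 'x', sizeof(unterminated));  /* no NUL anywhere */

        printf("%d\n", name_valid(dup, existing, 2));           /* 0: duplicate */
        printf("%d\n", name_valid(fresh, existing, 2));         /* 1: accepted  */
        printf("%d\n", name_valid(unterminated, existing, 2));  /* 0: no terminator */
        return 0;
    }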
00:09:10.627 [2024-04-17 12:53:14.621724] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:09:10.627 [2024-04-17 12:53:14.621954] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:09:10.627 [2024-04-17 12:53:14.622199] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:09:10.627 passed 00:09:10.627 Test: lvol_esnap_create_delete ...passed 00:09:10.627 Test: lvol_esnap_load_esnaps ...[2024-04-17 12:53:14.623029] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:09:10.627 passed 00:09:10.627 Test: lvol_esnap_missing ...[2024-04-17 12:53:14.623426] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:09:10.627 [2024-04-17 12:53:14.623566] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:09:10.627 passed 00:09:10.627 Test: lvol_esnap_hotplug ... 00:09:10.627 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:09:10.627 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:09:10.627 [2024-04-17 12:53:14.624860] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol aab037df-5c94-42d1-bb7c-7c16e653870f: failed to create esnap bs_dev: error -12 00:09:10.627 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:09:10.627 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:09:10.627 [2024-04-17 12:53:14.625407] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 136d8a4b-1bc0-4a29-aad8-0cf15357ea14: failed to create esnap bs_dev: error -12 00:09:10.627 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:09:10.627 [2024-04-17 12:53:14.625772] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 93381f7f-d5c0-4dd3-aa73-4dca9a2f0848: failed to create esnap bs_dev: error -12 00:09:10.627 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:09:10.627 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:09:10.627 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:09:10.627 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:09:10.627 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:09:10.627 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:09:10.627 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:09:10.627 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:09:10.627 passed 00:09:10.627 Test: lvol_get_by ...passed 00:09:10.627 00:09:10.627 Run Summary: Type Total Ran Passed Failed Inactive 00:09:10.627 suites 1 1 n/a 0 0 00:09:10.627 tests 34 34 34 0 0 00:09:10.627 asserts 1439 1439 1439 0 n/a 00:09:10.627 00:09:10.627 Elapsed time = 0.014 seconds 00:09:10.627 ************************************ 00:09:10.627 END TEST unittest_lvol 00:09:10.627 
************************************ 00:09:10.627 00:09:10.627 real 0m0.055s 00:09:10.627 user 0m0.015s 00:09:10.627 sys 0m0.031s 00:09:10.627 12:53:14 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:09:10.627 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:09:10.627 12:53:14 -- unit/unittest.sh@249 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:10.627 12:53:14 -- unit/unittest.sh@250 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:09:10.627 12:53:14 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:09:10.627 12:53:14 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:10.627 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:09:10.627 ************************************ 00:09:10.627 START TEST unittest_nvme_rdma 00:09:10.627 ************************************ 00:09:10.627 12:53:14 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:09:10.627 00:09:10.627 00:09:10.627 CUnit - A unit testing framework for C - Version 2.1-3 00:09:10.627 http://cunit.sourceforge.net/ 00:09:10.627 00:09:10.627 00:09:10.627 Suite: nvme_rdma 00:09:10.628 Test: test_nvme_rdma_build_sgl_request ...[2024-04-17 12:53:14.734387] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1459:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:09:10.628 [2024-04-17 12:53:14.734826] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1632:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:09:10.628 [2024-04-17 12:53:14.735020] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1688:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:09:10.628 passed 00:09:10.628 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:09:10.628 Test: test_nvme_rdma_build_contig_request ...[2024-04-17 12:53:14.735555] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1569:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:09:10.628 passed 00:09:10.628 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:09:10.628 Test: test_nvme_rdma_create_reqs ...[2024-04-17 12:53:14.735856] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1011:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:09:10.628 passed 00:09:10.628 Test: test_nvme_rdma_create_rsps ...[2024-04-17 12:53:14.736478] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 929:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:09:10.628 passed 00:09:10.628 Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-04-17 12:53:14.736927] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1826:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:09:10.628 [2024-04-17 12:53:14.737095] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1826:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:09:10.628 passed 00:09:10.628 Test: test_nvme_rdma_poller_create ...passed 00:09:10.628 Test: test_nvme_rdma_qpair_process_cm_event ...[2024-04-17 12:53:14.737534] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 530:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:09:10.628 passed 00:09:10.628 Test: test_nvme_rdma_ctrlr_construct ...passed 00:09:10.628 Test: test_nvme_rdma_req_put_and_get ...passed 00:09:10.628 Test: test_nvme_rdma_req_init ...passed 00:09:10.628 Test: test_nvme_rdma_validate_cm_event ...[2024-04-17 12:53:14.738529] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:09:10.628 [2024-04-17 12:53:14.738672] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:09:10.628 passed 00:09:10.628 Test: test_nvme_rdma_qpair_init ...passed 00:09:10.628 Test: test_nvme_rdma_qpair_submit_request ...passed 00:09:10.628 Test: test_nvme_rdma_memory_domain ...[2024-04-17 12:53:14.739496] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 353:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain 00:09:10.628 passed 00:09:10.628 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:09:10.628 Test: test_rdma_get_memory_translation ...[2024-04-17 12:53:14.739769] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1448:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:09:10.628 [2024-04-17 12:53:14.739954] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1459:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:09:10.628 passed 00:09:10.628 Test: test_get_rdma_qpair_from_wc ...passed 00:09:10.628 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:09:10.628 Test: test_nvme_rdma_poll_group_get_stats ...[2024-04-17 12:53:14.740382] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3273:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:09:10.628 [2024-04-17 12:53:14.740527] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3273:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:09:10.628 passed 00:09:10.628 Test: test_nvme_rdma_qpair_set_poller ...[2024-04-17 12:53:14.740991] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2985:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:09:10.628 [2024-04-17 12:53:14.741155] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3031:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:09:10.628 [2024-04-17 12:53:14.741279] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 727:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffc9e438b30 on poll group 0x60c000000040 00:09:10.628 [2024-04-17 12:53:14.741445] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2985:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
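
test_nvme_rdma_validate_cm_event above checks that each event read from the CM event channel matches what the connection state machine expects (ADDR_RESOLVED vs CONNECT_RESPONSE, ESTABLISHED vs REJECTED). A simplified sketch of that pattern against librdmacm; the function name and -EBADMSG return are illustrative, not necessarily what nvme_rdma uses internally.

#include <errno.h>
#include <stdio.h>
#include <rdma/rdma_cma.h>

/* Compare the event type delivered on the CM event channel with the one
 * the connection state machine expects; mismatches are logged and rejected. */
static int
validate_cm_event(enum rdma_cm_event_type expected, const struct rdma_cm_event *event)
{
    if (event->event != expected) {
        fprintf(stderr, "Expected event %d but received %d (status = %d)\n",
                expected, event->event, event->status);
        return -EBADMSG;
    }
    return 0;
}
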
00:09:10.628 [2024-04-17 12:53:14.741575] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3031:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:09:10.628 [2024-04-17 12:53:14.741704] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 727:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffc9e438b30 on poll group 0x60c000000040 00:09:10.628 [2024-04-17 12:53:14.741911] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 705:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:09:10.628 passed 00:09:10.628 00:09:10.628 Run Summary: Type Total Ran Passed Failed Inactive 00:09:10.628 suites 1 1 n/a 0 0 00:09:10.628 tests 22 22 22 0 0 00:09:10.628 asserts 412 412 412 0 n/a 00:09:10.628 00:09:10.628 Elapsed time = 0.004 seconds 00:09:10.628 00:09:10.628 real 0m0.039s 00:09:10.628 user 0m0.018s 00:09:10.628 sys 0m0.017s 00:09:10.628 12:53:14 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:09:10.628 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:09:10.628 ************************************ 00:09:10.628 END TEST unittest_nvme_rdma 00:09:10.628 ************************************ 00:09:10.886 12:53:14 -- unit/unittest.sh@251 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:09:10.886 12:53:14 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:09:10.886 12:53:14 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:10.886 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:09:10.886 ************************************ 00:09:10.886 START TEST unittest_nvmf_transport 00:09:10.886 ************************************ 00:09:10.886 12:53:14 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:09:10.886 00:09:10.886 00:09:10.886 CUnit - A unit testing framework for C - Version 2.1-3 00:09:10.886 http://cunit.sourceforge.net/ 00:09:10.886 00:09:10.886 00:09:10.886 Suite: nvmf 00:09:10.886 Test: test_spdk_nvmf_transport_create ...[2024-04-17 12:53:14.857628] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 249:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:09:10.886 [2024-04-17 12:53:14.858018] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 269:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:09:10.886 [2024-04-17 12:53:14.858169] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 273:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:09:10.886 [2024-04-17 12:53:14.858374] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 256:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:09:10.886 passed 00:09:10.886 Test: test_nvmf_transport_poll_group_create ...passed 00:09:10.886 Test: test_spdk_nvmf_transport_opts_init ...[2024-04-17 12:53:14.858996] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 790:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
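
The transport_ut failures above enumerate nvmf_transport_create's option checks: io_unit_size must be non-zero and must fit the iobuf pool's large buffer, and max_io_size must be a power of two of at least 8 KiB. A condensed standalone sketch of those three checks; LARGE_BUF_SIZE is an assumed constant taken from the value in the log, not SPDK's symbol.

#include <stdbool.h>
#include <stdint.h>

#define LARGE_BUF_SIZE 65536u   /* assumed iobuf pool large buffer size, per the log */

static bool
is_power_of_two(uint32_t v)
{
    return v != 0 && (v & (v - 1)) == 0;
}

static bool
transport_opts_ok(uint32_t io_unit_size, uint32_t max_io_size)
{
    if (io_unit_size == 0 || io_unit_size > LARGE_BUF_SIZE) {
        return false;   /* "io_unit_size cannot be 0" / larger than pool buffer */
    }
    /* "max_io_size ... must be a power of 2 and be greater than or equal 8KB" */
    return is_power_of_two(max_io_size) && max_io_size >= 8192;
}
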
00:09:10.886 [2024-04-17 12:53:14.859169] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 795:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:09:10.886 [2024-04-17 12:53:14.859280] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 800:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:09:10.886 passed 00:09:10.886 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:09:10.886 00:09:10.886 Run Summary: Type Total Ran Passed Failed Inactive 00:09:10.886 suites 1 1 n/a 0 0 00:09:10.886 tests 4 4 4 0 0 00:09:10.886 asserts 49 49 49 0 n/a 00:09:10.886 00:09:10.886 Elapsed time = 0.001 seconds 00:09:10.886 00:09:10.886 real 0m0.042s 00:09:10.886 user 0m0.024s 00:09:10.886 sys 0m0.016s 00:09:10.886 12:53:14 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:09:10.886 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:09:10.886 ************************************ 00:09:10.886 END TEST unittest_nvmf_transport 00:09:10.886 ************************************ 00:09:10.886 12:53:14 -- unit/unittest.sh@252 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:09:10.886 12:53:14 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:09:10.886 12:53:14 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:10.886 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:09:10.886 ************************************ 00:09:10.886 START TEST unittest_rdma 00:09:10.886 ************************************ 00:09:10.886 12:53:14 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:09:10.886 00:09:10.886 00:09:10.887 CUnit - A unit testing framework for C - Version 2.1-3 00:09:10.887 http://cunit.sourceforge.net/ 00:09:10.887 00:09:10.887 00:09:10.887 Suite: rdma_common 00:09:10.887 Test: test_spdk_rdma_pd ...[2024-04-17 12:53:14.972973] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:09:10.887 [2024-04-17 12:53:14.973469] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:09:10.887 passed 00:09:10.887 00:09:10.887 Run Summary: Type Total Ran Passed Failed Inactive 00:09:10.887 suites 1 1 n/a 0 0 00:09:10.887 tests 1 1 1 0 0 00:09:10.887 asserts 31 31 31 0 n/a 00:09:10.887 00:09:10.887 Elapsed time = 0.001 seconds 00:09:10.887 00:09:10.887 real 0m0.032s 00:09:10.887 user 0m0.022s 00:09:10.887 sys 0m0.009s 00:09:10.887 12:53:14 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:09:10.887 12:53:14 -- common/autotest_common.sh@10 -- # set +x 00:09:10.887 ************************************ 00:09:10.887 END TEST unittest_rdma 00:09:10.887 ************************************ 00:09:10.887 12:53:15 -- unit/unittest.sh@255 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:11.145 12:53:15 -- unit/unittest.sh@256 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:09:11.145 12:53:15 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:09:11.145 12:53:15 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:11.145 12:53:15 -- common/autotest_common.sh@10 -- # set +x 00:09:11.145 ************************************ 00:09:11.145 START TEST unittest_nvme_cuse 00:09:11.145 ************************************ 00:09:11.145 12:53:15 -- common/autotest_common.sh@1099 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:09:11.145 00:09:11.145 00:09:11.145 CUnit - A unit testing framework for C - Version 2.1-3 00:09:11.145 http://cunit.sourceforge.net/ 00:09:11.145 00:09:11.145 00:09:11.145 Suite: nvme_cuse 00:09:11.145 Test: test_cuse_nvme_submit_io_read_write ...passed 00:09:11.145 Test: test_cuse_nvme_submit_io_read_write_with_md ...passed 00:09:11.145 Test: test_cuse_nvme_submit_passthru_cmd ...passed 00:09:11.145 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:09:11.145 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:09:11.145 Test: test_cuse_nvme_submit_io ...[2024-04-17 12:53:15.084940] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 667:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:09:11.145 passed 00:09:11.145 Test: test_cuse_nvme_reset ...[2024-04-17 12:53:15.085486] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 352:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:09:11.145 passed 00:09:12.079 Test: test_nvme_cuse_stop ...passed 00:09:12.079 Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed 00:09:12.079 00:09:12.079 Run Summary: Type Total Ran Passed Failed Inactive 00:09:12.079 suites 1 1 n/a 0 0 00:09:12.079 tests 9 9 9 0 0 00:09:12.079 asserts 118 118 118 0 n/a 00:09:12.079 00:09:12.079 Elapsed time = 1.001 seconds 00:09:12.079 00:09:12.079 real 0m1.038s 00:09:12.079 user 0m0.585s 00:09:12.079 sys 0m0.448s 00:09:12.079 12:53:16 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:09:12.079 12:53:16 -- common/autotest_common.sh@10 -- # set +x 00:09:12.079 ************************************ 00:09:12.079 END TEST unittest_nvme_cuse 00:09:12.079 ************************************ 00:09:12.079 12:53:16 -- unit/unittest.sh@259 -- # run_test unittest_nvmf unittest_nvmf 00:09:12.079 12:53:16 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:09:12.079 12:53:16 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:12.079 12:53:16 -- common/autotest_common.sh@10 -- # set +x 00:09:12.079 ************************************ 00:09:12.079 START TEST unittest_nvmf 00:09:12.079 ************************************ 00:09:12.079 12:53:16 -- common/autotest_common.sh@1099 -- # unittest_nvmf 00:09:12.079 12:53:16 -- unit/unittest.sh@106 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:09:12.079 00:09:12.079 00:09:12.079 CUnit - A unit testing framework for C - Version 2.1-3 00:09:12.079 http://cunit.sourceforge.net/ 00:09:12.080 00:09:12.080 00:09:12.080 Suite: nvmf 00:09:12.080 Test: test_get_log_page ...[2024-04-17 12:53:16.193227] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2562:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:09:12.080 passed 00:09:12.080 Test: test_process_fabrics_cmd ...passed 00:09:12.080 Test: test_connect ...[2024-04-17 12:53:16.194349] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 956:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:09:12.080 [2024-04-17 12:53:16.194558] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 819:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:09:12.080 [2024-04-17 12:53:16.194717] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 995:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:09:12.080 [2024-04-17 12:53:16.194847] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 
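
test_connect above begins by validating the Fabrics CONNECT command itself: a 0x3ff-byte payload is rejected because the CONNECT data block is a fixed 1024 (0x400) bytes, and a HOSTNQN without a NUL terminator is also refused. A sketch of the length-and-termination part; the struct is abbreviated to the one field used here and is not the real spdk_nvmf_fabric_connect_data layout.

#include <stdbool.h>
#include <stddef.h>
#include <string.h>

#define CONNECT_DATA_LEN 1024u  /* fixed size of the Fabrics CONNECT data block */
#define NQN_MAX_LEN 223

struct connect_data {   /* abbreviated: the real block also has hostid, cntlid, subnqn */
    char hostnqn[NQN_MAX_LEN + 1];
};

static bool
connect_cmd_ok(const void *data, size_t length)
{
    const struct connect_data *cd = data;

    if (length < CONNECT_DATA_LEN) {
        return false;   /* "Connect command data length 0x3ff too small" */
    }
    /* "Connect HOSTNQN is not null terminated" */
    return memchr(cd->hostnqn, '\0', sizeof(cd->hostnqn)) != NULL;
}
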
00:09:12.080 [2024-04-17 12:53:16.195025] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 830:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:09:12.080 [2024-04-17 12:53:16.195174] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 837:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:09:12.080 [2024-04-17 12:53:16.195362] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 843:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:09:12.080 [2024-04-17 12:53:16.195499] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 870:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 00:09:12.080 [2024-04-17 12:53:16.195681] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:09:12.080 [2024-04-17 12:53:16.195884] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 623:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:09:12.080 [2024-04-17 12:53:16.196252] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 629:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:09:12.080 [2024-04-17 12:53:16.196431] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 635:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:09:12.080 [2024-04-17 12:53:16.196646] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 642:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:09:12.080 [2024-04-17 12:53:16.196816] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 665:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:09:12.080 [2024-04-17 12:53:16.196994] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 242:ctrlr_add_qpair_and_send_rsp: *ERROR*: Got I/O connect with duplicate QID 1 00:09:12.080 [2024-04-17 12:53:16.197228] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 750:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 2, group (nil)) 00:09:12.080 [2024-04-17 12:53:16.197405] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 750:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 0, group (nil)) 00:09:12.080 passed 00:09:12.080 Test: test_get_ns_id_desc_list ...passed 00:09:12.080 Test: test_identify_ns ...[2024-04-17 12:53:16.198002] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:12.080 [2024-04-17 12:53:16.198350] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:09:12.080 [2024-04-17 12:53:16.198556] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:09:12.080 passed 00:09:12.080 Test: test_identify_ns_iocs_specific ...[2024-04-17 12:53:16.198919] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:12.080 [2024-04-17 12:53:16.199233] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:12.080 passed 00:09:12.080 Test: test_reservation_write_exclusive ...passed 00:09:12.080 Test: test_reservation_exclusive_access ...passed 00:09:12.080 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:09:12.080 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:09:12.080 Test: test_reservation_notification_log_page ...passed 00:09:12.080 
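
The SQSIZE checks above reflect that the CONNECT SQSIZE field is zero-based: 0 is always invalid, and the largest legal value is one less than the supported queue depth (31 for the depth-32 admin queue in this test, 63 for depth-64 I/O queues). A sketch of that rule, with an illustrative helper name:

#include <stdbool.h>
#include <stdint.h>

/* SQSIZE is a 0-based queue size: legal values are 1 .. depth - 1. */
static bool
sqsize_ok(uint16_t sqsize, uint16_t queue_depth)
{
    return queue_depth >= 2 && sqsize >= 1 && sqsize <= queue_depth - 1;
}

/* Matching the log: sqsize_ok(0, 32) and sqsize_ok(32, 32) are false
 * ("min 1, max 31"), while sqsize_ok(31, 32) is true. */
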
Test: test_get_dif_ctx ...passed 00:09:12.080 Test: test_set_get_features ...[2024-04-17 12:53:16.200910] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1592:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:09:12.080 [2024-04-17 12:53:16.201055] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1592:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:09:12.080 [2024-04-17 12:53:16.201186] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1603:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:09:12.080 [2024-04-17 12:53:16.201307] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1679:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:09:12.080 passed 00:09:12.080 Test: test_identify_ctrlr ...passed 00:09:12.080 Test: test_identify_ctrlr_iocs_specific ...passed 00:09:12.080 Test: test_custom_admin_cmd ...passed 00:09:12.080 Test: test_fused_compare_and_write ...[2024-04-17 12:53:16.202143] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4163:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:09:12.080 [2024-04-17 12:53:16.202323] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4152:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:09:12.080 [2024-04-17 12:53:16.202445] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4170:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:09:12.080 passed 00:09:12.080 Test: test_multi_async_event_reqs ...passed 00:09:12.080 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:09:12.080 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:09:12.080 Test: test_multi_async_events ...passed 00:09:12.080 Test: test_rae ...passed 00:09:12.080 Test: test_nvmf_ctrlr_create_destruct ...passed 00:09:12.080 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:09:12.080 Test: test_spdk_nvmf_request_zcopy_start ...[2024-04-17 12:53:16.204176] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4290:nvmf_ctrlr_process_io_cmd: *ERROR*: I/O command sent before CONNECT 00:09:12.080 passed 00:09:12.080 Test: test_zcopy_read ...passed 00:09:12.080 Test: test_zcopy_write ...passed 00:09:12.080 Test: test_nvmf_property_set ...passed 00:09:12.080 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...passed 00:09:12.080 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...[2024-04-17 12:53:16.204869] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1890:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:09:12.080 [2024-04-17 12:53:16.204937] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1890:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:09:12.080 [2024-04-17 12:53:16.205104] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1913:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:09:12.080 [2024-04-17 12:53:16.205179] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1919:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:09:12.080 [2024-04-17 12:53:16.205274] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1931:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:09:12.080 passed 00:09:12.080 Test: test_nvmf_ctrlr_ns_attachment ...passed 00:09:12.080 00:09:12.080 Run Summary: Type Total Ran Passed Failed Inactive 00:09:12.080 suites 1 1 n/a 0 0 00:09:12.080 tests 31 31 31 0 0 00:09:12.080 asserts 951 951 951 0 n/a 
00:09:12.080 00:09:12.080 Elapsed time = 0.006 seconds 00:09:12.080 12:53:16 -- unit/unittest.sh@107 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:09:12.340 00:09:12.340 00:09:12.340 CUnit - A unit testing framework for C - Version 2.1-3 00:09:12.340 http://cunit.sourceforge.net/ 00:09:12.340 00:09:12.340 00:09:12.340 Suite: nvmf 00:09:12.340 Test: test_get_rw_params ...passed 00:09:12.340 Test: test_lba_in_range ...passed 00:09:12.340 Test: test_get_dif_ctx ...passed 00:09:12.340 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:09:12.340 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-04-17 12:53:16.230635] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 435:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:09:12.340 [2024-04-17 12:53:16.230997] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 443:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:09:12.340 [2024-04-17 12:53:16.231171] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 450:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:09:12.340 passed 00:09:12.340 Test: test_nvmf_bdev_ctrlr_zcopy_start ...[2024-04-17 12:53:16.231455] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 953:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:09:12.340 [2024-04-17 12:53:16.231565] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 960:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:09:12.340 passed 00:09:12.340 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-04-17 12:53:16.231934] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 389:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:09:12.340 [2024-04-17 12:53:16.232078] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 396:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:09:12.340 [2024-04-17 12:53:16.232290] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 488:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:09:12.340 [2024-04-17 12:53:16.232461] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 495:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:09:12.340 passed 00:09:12.340 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:09:12.340 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:09:12.340 00:09:12.340 Run Summary: Type Total Ran Passed Failed Inactive 00:09:12.340 suites 1 1 n/a 0 0 00:09:12.340 tests 9 9 9 0 0 00:09:12.340 asserts 157 157 157 0 n/a 00:09:12.340 00:09:12.340 Elapsed time = 0.001 seconds 00:09:12.340 12:53:16 -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:09:12.340 00:09:12.340 00:09:12.340 CUnit - A unit testing framework for C - Version 2.1-3 00:09:12.340 http://cunit.sourceforge.net/ 00:09:12.340 00:09:12.340 00:09:12.340 Suite: nvmf 00:09:12.340 Test: test_discovery_log ...passed 00:09:12.340 Test: test_discovery_log_with_filters ...passed 00:09:12.340 00:09:12.340 Run Summary: Type Total Ran Passed Failed Inactive 00:09:12.340 suites 1 1 n/a 0 0 00:09:12.340 tests 2 2 2 0 0 00:09:12.340 asserts 238 238 238 0 n/a 00:09:12.340 00:09:12.340 Elapsed time = 0.002 seconds 00:09:12.340 12:53:16 -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:09:12.340 00:09:12.340 00:09:12.340 CUnit - A unit testing framework for C - 
Version 2.1-3 00:09:12.340 http://cunit.sourceforge.net/ 00:09:12.340 00:09:12.340 00:09:12.340 Suite: nvmf 00:09:12.340 Test: nvmf_test_create_subsystem ...[2024-04-17 12:53:16.304614] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:09:12.340 [2024-04-17 12:53:16.305057] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:09:12.340 [2024-04-17 12:53:16.305260] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:09:12.340 [2024-04-17 12:53:16.305407] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:09:12.340 [2024-04-17 12:53:16.305561] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:09:12.340 [2024-04-17 12:53:16.305713] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 00:09:12.340 [2024-04-17 12:53:16.305932] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:09:12.340 [2024-04-17 12:53:16.306206] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 
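
nvmf_test_create_subsystem above exercises the NQN grammar: "nqn." plus a dated reverse-domain, a ':'-prefixed user-specified name, domain labels that start with a letter and end alphanumeric, and a total length capped at 223 bytes ("length 224 > max 223"). A deliberately simplified sketch of the outer checks; the real validator also walks each domain label and verifies UTF-8.

#include <stdbool.h>
#include <string.h>

#define NQN_MIN_LEN 11    /* the suite also rejects "" and "nqn." as "< min 11" */
#define NQN_MAX_LEN 223

static bool
nqn_shape_ok(const char *nqn)
{
    size_t len = strlen(nqn);

    if (len < NQN_MIN_LEN || len > NQN_MAX_LEN) {
        return false;   /* e.g. "length 224 > max 223" in the log */
    }
    if (strncmp(nqn, "nqn.", 4) != 0) {
        return false;
    }
    /* "NQN must contain user specified name with a ':' as a prefix." */
    return strchr(nqn, ':') != NULL;
}
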
00:09:12.340 [2024-04-17 12:53:16.306410] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:09:12.340 [2024-04-17 12:53:16.306553] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:09:12.340 [2024-04-17 12:53:16.306687] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:09:12.340 passed 00:09:12.340 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-04-17 12:53:16.307117] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1883:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:09:12.340 [2024-04-17 12:53:16.307335] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1864:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:09:12.340 passed 00:09:12.340 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:09:12.340 Test: test_spdk_nvmf_ns_visible ...[2024-04-17 12:53:16.307923] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "": length 0 < min 11 00:09:12.340 passed 00:09:12.340 Test: test_reservation_register ...[2024-04-17 12:53:16.308646] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2915:nvmf_ns_reservation_register: *ERROR*: The same host already registered a key with 0xa1 00:09:12.340 [2024-04-17 12:53:16.308887] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2973:nvmf_ns_reservation_register: *ERROR*: No registrant 00:09:12.340 passed 00:09:12.340 Test: test_reservation_register_with_ptpl ...passed 00:09:12.340 Test: test_reservation_acquire_preempt_1 ...[2024-04-17 12:53:16.310314] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2915:nvmf_ns_reservation_register: *ERROR*: The same host already registered a key with 0xa1 00:09:12.340 passed 00:09:12.340 Test: test_reservation_acquire_release_with_ptpl ...passed 00:09:12.340 Test: test_reservation_release ...[2024-04-17 12:53:16.312591] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2915:nvmf_ns_reservation_register: *ERROR*: The same host already registered a key with 0xa1 00:09:12.340 passed 00:09:12.340 Test: test_reservation_unregister_notification ...[2024-04-17 12:53:16.313141] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2915:nvmf_ns_reservation_register: *ERROR*: The same host already registered a key with 0xa1 00:09:12.340 passed 00:09:12.340 Test: test_reservation_release_notification ...[2024-04-17 12:53:16.313656] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2915:nvmf_ns_reservation_register: *ERROR*: The same host already registered a key with 0xa1 00:09:12.340 passed 00:09:12.340 Test: test_reservation_release_notification_write_exclusive ...[2024-04-17 12:53:16.314154] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2915:nvmf_ns_reservation_register: *ERROR*: The same host already registered a key with 0xa1 00:09:12.340 passed 00:09:12.340 Test: test_reservation_clear_notification ...[2024-04-17 12:53:16.314643] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2915:nvmf_ns_reservation_register: *ERROR*: The same host already registered a key with 0xa1 00:09:12.340 passed 00:09:12.340 Test: test_reservation_preempt_notification ...[2024-04-17 12:53:16.315143]
/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2915:nvmf_ns_reservation_register: *ERROR*: The same host already registered a key with 0xa1 00:09:12.340 passed 00:09:12.340 Test: test_spdk_nvmf_ns_event ...passed 00:09:12.340 Test: test_nvmf_ns_reservation_add_remove_registrant ...passed 00:09:12.340 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:09:12.340 Test: test_spdk_nvmf_subsystem_add_host ...[2024-04-17 12:53:16.316615] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 262:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:09:12.340 [2024-04-17 12:53:16.316804] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 954:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to transport_ut transport 00:09:12.340 passed 00:09:12.340 Test: test_nvmf_ns_reservation_report ...[2024-04-17 12:53:16.317198] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3278:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:09:12.340 passed 00:09:12.340 Test: test_nvmf_nqn_is_valid ...[2024-04-17 12:53:16.317561] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:09:12.340 [2024-04-17 12:53:16.317713] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:a5140d7a-1eba-47c3-83ca-32549ea58ce": uuid is not the correct length 00:09:12.340 [2024-04-17 12:53:16.317891] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:09:12.340 passed 00:09:12.340 Test: test_nvmf_ns_reservation_restore ...[2024-04-17 12:53:16.318254] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2472:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:09:12.340 passed 00:09:12.340 Test: test_nvmf_subsystem_state_change ...passed 00:09:12.340 Test: test_nvmf_reservation_custom_ops ...passed 00:09:12.340 00:09:12.340 Run Summary: Type Total Ran Passed Failed Inactive 00:09:12.340 suites 1 1 n/a 0 0 00:09:12.340 tests 23 23 23 0 0 00:09:12.340 asserts 482 482 482 0 n/a 00:09:12.340 00:09:12.340 Elapsed time = 0.009 seconds 00:09:12.340 12:53:16 -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:09:12.341 00:09:12.341 00:09:12.341 CUnit - A unit testing framework for C - Version 2.1-3 00:09:12.341 http://cunit.sourceforge.net/ 00:09:12.341 00:09:12.341 00:09:12.341 Suite: nvmf 00:09:12.341 Test: test_nvmf_tcp_create ...[2024-04-17 12:53:16.379174] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 742:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:09:12.341 passed 00:09:12.341 Test: test_nvmf_tcp_destroy ...passed 00:09:12.341 Test: test_nvmf_tcp_poll_group_create ...passed 00:09:12.341 Test: test_nvmf_tcp_send_c2h_data ...passed 00:09:12.341 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:09:12.341 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:09:12.341 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:09:12.341 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-04-17 12:53:16.466624] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:12.341 [2024-04-17 12:53:16.466801]
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcf9725d80 is same with the state(5) to be set 00:09:12.341 [2024-04-17 12:53:16.466966] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcf9725d80 is same with the state(5) to be set 00:09:12.341 [2024-04-17 12:53:16.467100] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:12.341 [2024-04-17 12:53:16.467233] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcf9725d80 is same with the state(5) to be set 00:09:12.341 passed 00:09:12.341 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:09:12.341 Test: test_nvmf_tcp_icreq_handle ...[2024-04-17 12:53:16.467547] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2102:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:09:12.341 [2024-04-17 12:53:16.467713] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:12.341 [2024-04-17 12:53:16.467868] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcf9725d80 is same with the state(5) to be set 00:09:12.341 [2024-04-17 12:53:16.468011] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2102:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:09:12.341 [2024-04-17 12:53:16.468144] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcf9725d80 is same with the state(5) to be set 00:09:12.341 [2024-04-17 12:53:16.468260] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:12.341 [2024-04-17 12:53:16.468319] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcf9725d80 is same with the state(5) to be set 00:09:12.341 [2024-04-17 12:53:16.468490] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2 00:09:12.341 [2024-04-17 12:53:16.468715] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcf9725d80 is same with the state(5) to be set 00:09:12.341 passed 00:09:12.341 Test: test_nvmf_tcp_check_xfer_type ...passed 00:09:12.341 Test: test_nvmf_tcp_invalid_sgl ...[2024-04-17 12:53:16.469113] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2497:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:09:12.341 [2024-04-17 12:53:16.469253] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:12.341 [2024-04-17 12:53:16.469366] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcf9725d80 is same with the state(5) to be set 00:09:12.341 passed 00:09:12.341 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-04-17 12:53:16.469537] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2229:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7ffcf9726ae0 00:09:12.341 [2024-04-17 12:53:16.469702] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, 
errno=2 00:09:12.341 [2024-04-17 12:53:16.469839] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcf9726240 is same with the state(5) to be set 00:09:12.341 [2024-04-17 12:53:16.469976] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2286:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7ffcf9726240 00:09:12.341 [2024-04-17 12:53:16.470094] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:12.341 [2024-04-17 12:53:16.470153] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcf9726240 is same with the state(5) to be set 00:09:12.341 [2024-04-17 12:53:16.470306] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2239:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:09:12.341 [2024-04-17 12:53:16.470367] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:12.341 [2024-04-17 12:53:16.470529] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcf9726240 is same with the state(5) to be set 00:09:12.341 [2024-04-17 12:53:16.470669] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2278:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:09:12.341 [2024-04-17 12:53:16.470785] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:12.341 [2024-04-17 12:53:16.470909] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcf9726240 is same with the state(5) to be set 00:09:12.341 [2024-04-17 12:53:16.471042] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:12.341 [2024-04-17 12:53:16.471188] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcf9726240 is same with the state(5) to be set 00:09:12.341 [2024-04-17 12:53:16.471336] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:12.341 [2024-04-17 12:53:16.471452] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcf9726240 is same with the state(5) to be set 00:09:12.341 [2024-04-17 12:53:16.471603] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:12.341 [2024-04-17 12:53:16.471720] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcf9726240 is same with the state(5) to be set 00:09:12.341 [2024-04-17 12:53:16.471884] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:12.341 [2024-04-17 12:53:16.471995] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcf9726240 is same with the state(5) to be set 00:09:12.341 [2024-04-17 12:53:16.472150] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:12.341 [2024-04-17 12:53:16.472289] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcf9726240 is same with the state(5) to be set 00:09:12.341 [2024-04-17 12:53:16.472422] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:12.341 [2024-04-17 12:53:16.472475] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcf9726240 is same with the state(5) to be set 00:09:12.341 passed 00:09:12.600 Test: test_nvmf_tcp_tls_add_remove_credentials ...passed 00:09:12.600 Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-04-17 12:53:16.488348] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:09:12.600 [2024-04-17 12:53:16.488433] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 00:09:12.600 passed 00:09:12.600 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-04-17 12:53:16.488973] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:09:12.600 [2024-04-17 12:53:16.489110] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:09:12.600 passed 00:09:12.600 Test: test_nvmf_tcp_tls_generate_tls_psk ...[2024-04-17 12:53:16.489501] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:09:12.600 [2024-04-17 12:53:16.489617] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 
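
The TLS PSK helpers in this suite fail cleanly when the caller's buffer cannot hold the generated identity or derived key ("Out buffer too small!", "Insufficient buffer size for out key!"). The usual C idiom for that guard, sketched with an illustrative format string rather than the exact identity layout the library builds:

#include <errno.h>
#include <stddef.h>
#include <stdio.h>

/* Write "<hostnqn> <subnqn>" into out, refusing to truncate. The real
 * identity string has a defined prefix and hash field; this only
 * demonstrates the buffer-size check. */
static int
format_psk_identity(char *out, size_t out_len, const char *hostnqn, const char *subnqn)
{
    int rc = snprintf(out, out_len, "%s %s", hostnqn, subnqn);

    if (rc < 0 || (size_t)rc >= out_len) {
        return -ENOBUFS;    /* out buffer too small */
    }
    return 0;
}
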
00:09:12.600 passed 00:09:12.600 00:09:12.600 Run Summary: Type Total Ran Passed Failed Inactive 00:09:12.600 suites 1 1 n/a 0 0 00:09:12.600 tests 17 17 17 0 0 00:09:12.600 asserts 222 222 222 0 n/a 00:09:12.600 00:09:12.600 Elapsed time = 0.125 seconds 00:09:12.600 12:53:16 -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:09:12.600 00:09:12.600 00:09:12.600 CUnit - A unit testing framework for C - Version 2.1-3 00:09:12.600 http://cunit.sourceforge.net/ 00:09:12.600 00:09:12.600 00:09:12.600 Suite: nvmf 00:09:12.600 Test: test_nvmf_tgt_create_poll_group ...passed 00:09:12.600 00:09:12.600 Run Summary: Type Total Ran Passed Failed Inactive 00:09:12.600 suites 1 1 n/a 0 0 00:09:12.600 tests 1 1 1 0 0 00:09:12.600 asserts 17 17 17 0 n/a 00:09:12.600 00:09:12.600 Elapsed time = 0.018 seconds 00:09:12.600 ************************************ 00:09:12.600 END TEST unittest_nvmf 00:09:12.600 ************************************ 00:09:12.600 00:09:12.600 real 0m0.450s 00:09:12.600 user 0m0.170s 00:09:12.600 sys 0m0.257s 00:09:12.600 12:53:16 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:09:12.600 12:53:16 -- common/autotest_common.sh@10 -- # set +x 00:09:12.600 12:53:16 -- unit/unittest.sh@260 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:12.600 12:53:16 -- unit/unittest.sh@265 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:12.600 12:53:16 -- unit/unittest.sh@266 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:09:12.600 12:53:16 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:09:12.600 12:53:16 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:12.600 12:53:16 -- common/autotest_common.sh@10 -- # set +x 00:09:12.600 ************************************ 00:09:12.600 START TEST unittest_nvmf_rdma 00:09:12.600 ************************************ 00:09:12.600 12:53:16 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:09:12.600 00:09:12.600 00:09:12.600 CUnit - A unit testing framework for C - Version 2.1-3 00:09:12.600 http://cunit.sourceforge.net/ 00:09:12.600 00:09:12.600 00:09:12.600 Suite: nvmf 00:09:12.600 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-04-17 12:53:16.731393] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1916:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:09:12.600 [2024-04-17 12:53:16.731961] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1966:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:09:12.600 [2024-04-17 12:53:16.732178] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1966:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:09:12.600 passed 00:09:12.600 Test: test_spdk_nvmf_rdma_request_process ...passed 00:09:12.600 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:09:12.600 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:09:12.600 Test: test_nvmf_rdma_opts_init ...passed 00:09:12.600 Test: test_nvmf_rdma_request_free_data ...passed 00:09:12.600 Test: test_nvmf_rdma_update_ibv_state ...[2024-04-17 12:53:16.735251] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 614:nvmf_rdma_update_ibv_state: *ERROR*: Failed to get updated RDMA queue pair state! 
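
test_nvmf_rdma_update_ibv_state covers both ways refreshing a queue pair's cached state can go wrong: the query itself failing, and the device reporting a state value outside the defined enum (the "bad state updated: 10, maybe hardware issue" case that follows). A sketch against libibverbs, simplified relative to the function under test:

#include <stdio.h>
#include <infiniband/verbs.h>

/* Re-query the QP state; returns the new state, or IBV_QPS_UNKNOWN if the
 * query fails or the device hands back an out-of-range value. */
static enum ibv_qp_state
update_qp_state(struct ibv_qp *qp)
{
    struct ibv_qp_attr attr;
    struct ibv_qp_init_attr init_attr;

    if (ibv_query_qp(qp, &attr, IBV_QP_STATE, &init_attr) != 0) {
        fprintf(stderr, "Failed to get updated RDMA queue pair state!\n");
        return IBV_QPS_UNKNOWN;
    }
    if (attr.qp_state > IBV_QPS_ERR) {
        /* e.g. 10: not a defined state, possibly a hardware issue */
        fprintf(stderr, "QP: bad state updated: %d\n", attr.qp_state);
        return IBV_QPS_UNKNOWN;
    }
    return attr.qp_state;
}
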
00:09:12.600 [2024-04-17 12:53:16.735482] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 625:nvmf_rdma_update_ibv_state: *ERROR*: QP#0: bad state updated: 10, maybe hardware issue 00:09:12.600 passed 00:09:12.600 Test: test_nvmf_rdma_resources_create ...passed 00:09:12.600 Test: test_nvmf_rdma_qpair_compare ...passed 00:09:12.600 Test: test_nvmf_rdma_resize_cq ...[2024-04-17 12:53:16.737750] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1006:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:09:12.601 Using CQ of insufficient size may lead to CQ overrun 00:09:12.601 [2024-04-17 12:53:16.738057] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1011:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:09:12.601 [2024-04-17 12:53:16.738252] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1019:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:09:12.601 passed 00:09:12.601 00:09:12.601 Run Summary: Type Total Ran Passed Failed Inactive 00:09:12.601 suites 1 1 n/a 0 0 00:09:12.601 tests 10 10 10 0 0 00:09:12.601 asserts 584 584 584 0 n/a 00:09:12.601 00:09:12.601 Elapsed time = 0.005 seconds 00:09:12.859 00:09:12.859 real 0m0.045s 00:09:12.859 user 0m0.030s 00:09:12.859 sys 0m0.013s 00:09:12.859 12:53:16 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:09:12.859 12:53:16 -- common/autotest_common.sh@10 -- # set +x 00:09:12.859 ************************************ 00:09:12.859 END TEST unittest_nvmf_rdma 00:09:12.859 ************************************ 00:09:12.859 12:53:16 -- unit/unittest.sh@269 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:12.859 12:53:16 -- unit/unittest.sh@273 -- # run_test unittest_scsi unittest_scsi 00:09:12.859 12:53:16 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:09:12.859 12:53:16 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:12.859 12:53:16 -- common/autotest_common.sh@10 -- # set +x 00:09:12.859 ************************************ 00:09:12.859 START TEST unittest_scsi 00:09:12.859 ************************************ 00:09:12.859 12:53:16 -- common/autotest_common.sh@1099 -- # unittest_scsi 00:09:12.859 12:53:16 -- unit/unittest.sh@115 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:09:12.859 00:09:12.859 00:09:12.859 CUnit - A unit testing framework for C - Version 2.1-3 00:09:12.859 http://cunit.sourceforge.net/ 00:09:12.859 00:09:12.859 00:09:12.859 Suite: dev_suite 00:09:12.859 Test: dev_destruct_null_dev ...passed 00:09:12.859 Test: dev_destruct_zero_luns ...passed 00:09:12.859 Test: dev_destruct_null_lun ...passed 00:09:12.859 Test: dev_destruct_success ...passed 00:09:12.859 Test: dev_construct_num_luns_zero ...[2024-04-17 12:53:16.836241] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:09:12.859 passed 00:09:12.859 Test: dev_construct_no_lun_zero ...[2024-04-17 12:53:16.837091] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:09:12.859 passed 00:09:12.859 Test: dev_construct_null_lun ...[2024-04-17 12:53:16.837617] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:09:12.859 passed 00:09:12.859 Test: dev_construct_name_too_long ...[2024-04-17 12:53:16.838055] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 
222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:09:12.859 passed 00:09:12.859 Test: dev_construct_success ...passed 00:09:12.859 Test: dev_construct_success_lun_zero_not_first ...passed 00:09:12.859 Test: dev_queue_mgmt_task_success ...passed 00:09:12.859 Test: dev_queue_task_success ...passed 00:09:12.859 Test: dev_stop_success ...passed 00:09:12.859 Test: dev_add_port_max_ports ...[2024-04-17 12:53:16.839961] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:09:12.859 passed 00:09:12.859 Test: dev_add_port_construct_failure1 ...[2024-04-17 12:53:16.840480] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:09:12.859 passed 00:09:12.859 Test: dev_add_port_construct_failure2 ...[2024-04-17 12:53:16.841019] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:09:12.859 passed 00:09:12.859 Test: dev_add_port_success1 ...passed 00:09:12.859 Test: dev_add_port_success2 ...passed 00:09:12.859 Test: dev_add_port_success3 ...passed 00:09:12.859 Test: dev_find_port_by_id_num_ports_zero ...passed 00:09:12.859 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:09:12.859 Test: dev_find_port_by_id_success ...passed 00:09:12.859 Test: dev_add_lun_bdev_not_found ...passed 00:09:12.859 Test: dev_add_lun_no_free_lun_id ...[2024-04-17 12:53:16.843236] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:09:12.859 passed 00:09:12.859 Test: dev_add_lun_success1 ...passed 00:09:12.859 Test: dev_add_lun_success2 ...passed 00:09:12.859 Test: dev_check_pending_tasks ...passed 00:09:12.859 Test: dev_iterate_luns ...passed 00:09:12.859 Test: dev_find_free_lun ...passed 00:09:12.859 00:09:12.859 Run Summary: Type Total Ran Passed Failed Inactive 00:09:12.859 suites 1 1 n/a 0 0 00:09:12.859 tests 29 29 29 0 0 00:09:12.859 asserts 97 97 97 0 n/a 00:09:12.859 00:09:12.859 Elapsed time = 0.004 seconds 00:09:12.859 12:53:16 -- unit/unittest.sh@116 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:09:12.859 00:09:12.859 00:09:12.859 CUnit - A unit testing framework for C - Version 2.1-3 00:09:12.859 http://cunit.sourceforge.net/ 00:09:12.859 00:09:12.859 00:09:12.859 Suite: lun_suite 00:09:12.859 Test: lun_task_mgmt_execute_abort_task_not_supported ...[2024-04-17 12:53:16.879912] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:09:12.859 passed 00:09:12.859 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...[2024-04-17 12:53:16.880536] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:09:12.859 passed 00:09:12.859 Test: lun_task_mgmt_execute_lun_reset ...passed 00:09:12.859 Test: lun_task_mgmt_execute_target_reset ...passed 00:09:12.859 Test: lun_task_mgmt_execute_invalid_case ...[2024-04-17 12:53:16.881327] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:09:12.859 passed 00:09:12.859 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:09:12.859 
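
The dev_suite results above pin down two construction limits: a device name may be at most 255 bytes (the 256-x name is rejected) and a device holds at most 4 ports ("device already has 4 ports"). Both checks as a standalone sketch, with illustrative constant and helper names:

#include <stdbool.h>
#include <stddef.h>
#include <string.h>

#define SCSI_DEV_MAX_NAME  255  /* "name longer than maximum allowed length 255" */
#define SCSI_DEV_MAX_PORTS 4    /* "device already has 4 ports" */

static bool
scsi_dev_name_ok(const char *name)
{
    return name != NULL && strlen(name) <= SCSI_DEV_MAX_NAME;
}

static bool
scsi_dev_can_add_port(size_t num_ports)
{
    return num_ports < SCSI_DEV_MAX_PORTS;
}
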
Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:09:12.859 Test: lun_append_task_null_lun_not_supported ...passed 00:09:12.859 Test: lun_execute_scsi_task_pending ...passed 00:09:12.859 Test: lun_execute_scsi_task_complete ...passed 00:09:12.859 Test: lun_execute_scsi_task_resize ...passed 00:09:12.859 Test: lun_destruct_success ...passed 00:09:12.859 Test: lun_construct_null_ctx ...[2024-04-17 12:53:16.883004] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:09:12.859 passed 00:09:12.859 Test: lun_construct_success ...passed 00:09:12.859 Test: lun_reset_task_wait_scsi_task_complete ...passed 00:09:12.859 Test: lun_reset_task_suspend_scsi_task ...passed 00:09:12.859 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:09:12.859 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:09:12.859 00:09:12.859 Run Summary: Type Total Ran Passed Failed Inactive 00:09:12.859 suites 1 1 n/a 0 0 00:09:12.859 tests 18 18 18 0 0 00:09:12.859 asserts 153 153 153 0 n/a 00:09:12.859 00:09:12.859 Elapsed time = 0.002 seconds 00:09:12.859 12:53:16 -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:09:12.859 00:09:12.859 00:09:12.859 CUnit - A unit testing framework for C - Version 2.1-3 00:09:12.859 http://cunit.sourceforge.net/ 00:09:12.859 00:09:12.859 00:09:12.859 Suite: scsi_suite 00:09:12.859 Test: scsi_init ...passed 00:09:12.859 00:09:12.859 Run Summary: Type Total Ran Passed Failed Inactive 00:09:12.859 suites 1 1 n/a 0 0 00:09:12.859 tests 1 1 1 0 0 00:09:12.859 asserts 1 1 1 0 n/a 00:09:12.859 00:09:12.859 Elapsed time = 0.000 seconds 00:09:12.859 12:53:16 -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:09:12.859 00:09:12.859 00:09:12.859 CUnit - A unit testing framework for C - Version 2.1-3 00:09:12.859 http://cunit.sourceforge.net/ 00:09:12.859 00:09:12.859 00:09:12.859 Suite: translation_suite 00:09:12.859 Test: mode_select_6_test ...passed 00:09:12.859 Test: mode_select_6_test2 ...passed 00:09:12.859 Test: mode_sense_6_test ...passed 00:09:12.859 Test: mode_sense_10_test ...passed 00:09:12.859 Test: inquiry_evpd_test ...passed 00:09:12.859 Test: inquiry_standard_test ...passed 00:09:12.859 Test: inquiry_overflow_test ...passed 00:09:12.859 Test: task_complete_test ...passed 00:09:12.859 Test: lba_range_test ...passed 00:09:12.859 Test: xfer_len_test ...[2024-04-17 12:53:16.947952] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:09:12.859 passed 00:09:12.859 Test: xfer_test ...passed 00:09:12.859 Test: scsi_name_padding_test ...passed 00:09:12.859 Test: get_dif_ctx_test ...passed 00:09:12.859 Test: unmap_split_test ...passed 00:09:12.859 00:09:12.859 Run Summary: Type Total Ran Passed Failed Inactive 00:09:12.859 suites 1 1 n/a 0 0 00:09:12.859 tests 14 14 14 0 0 00:09:12.859 asserts 1205 1205 1205 0 n/a 00:09:12.859 00:09:12.859 Elapsed time = 0.005 seconds 00:09:12.859 12:53:16 -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:09:12.859 00:09:12.859 00:09:12.859 CUnit - A unit testing framework for C - Version 2.1-3 00:09:12.859 http://cunit.sourceforge.net/ 00:09:12.859 00:09:12.859 00:09:12.859 Suite: reservation_suite 00:09:12.859 Test: test_reservation_register ...[2024-04-17 12:53:16.981998] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 
272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 doesn't match registrant's key 0xa 00:09:12.859 passed 00:09:12.859 Test: test_reservation_reserve ...[2024-04-17 12:53:16.983092] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 doesn't match registrant's key 0xa 00:09:12.859 [2024-04-17 12:53:16.983425] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:09:12.859 [2024-04-17 12:53:16.983765] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:09:12.859 passed 00:09:12.859 Test: test_reservation_preempt_non_all_regs ...[2024-04-17 12:53:16.984404] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 doesn't match registrant's key 0xa 00:09:12.859 [2024-04-17 12:53:16.984726] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:09:12.859 passed 00:09:12.859 Test: test_reservation_preempt_all_regs ...[2024-04-17 12:53:16.985326] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 doesn't match registrant's key 0xa 00:09:12.859 passed 00:09:12.859 Test: test_reservation_cmds_conflict ...[2024-04-17 12:53:16.985715] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 doesn't match registrant's key 0xa 00:09:12.859 [2024-04-17 12:53:16.985903] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:09:12.859 [2024-04-17 12:53:16.986041] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:09:12.860 [2024-04-17 12:53:16.986098] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:09:12.860 [2024-04-17 12:53:16.986226] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:09:12.860 [2024-04-17 12:53:16.986336] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:09:12.860 passed 00:09:12.860 Test: test_scsi2_reserve_release ...passed 00:09:12.860 Test: test_pr_with_scsi2_reserve_release ...[2024-04-17 12:53:16.986856] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 doesn't match registrant's key 0xa 00:09:12.860 passed 00:09:12.860 00:09:12.860 Run Summary: Type Total Ran Passed Failed Inactive 00:09:12.860 suites 1 1 n/a 0 0 00:09:12.860 tests 7 7 7 0 0 00:09:12.860 asserts 257 257 257 0 n/a 00:09:12.860 00:09:12.860 Elapsed time = 0.003 seconds 00:09:13.118 00:09:13.118 real 0m0.177s 00:09:13.118 user 0m0.089s 00:09:13.118 sys 0m0.073s 00:09:13.118 12:53:16 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:09:13.118 12:53:16 -- common/autotest_common.sh@10 -- # set +x 00:09:13.118 ************************************ 00:09:13.118 END TEST unittest_scsi 00:09:13.118 ************************************ 00:09:13.118 12:53:17 -- unit/unittest.sh@276 -- # uname -s 00:09:13.118 12:53:17 -- unit/unittest.sh@276 -- # '[' Linux = Linux ']' 00:09:13.118 12:53:17 -- unit/unittest.sh@277 -- # run_test unittest_sock
unittest_sock 00:09:13.118 12:53:17 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:09:13.118 12:53:17 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:13.118 12:53:17 -- common/autotest_common.sh@10 -- # set +x 00:09:13.118 ************************************ 00:09:13.118 START TEST unittest_sock 00:09:13.118 ************************************ 00:09:13.118 12:53:17 -- common/autotest_common.sh@1099 -- # unittest_sock 00:09:13.118 12:53:17 -- unit/unittest.sh@123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut 00:09:13.118 00:09:13.118 00:09:13.118 CUnit - A unit testing framework for C - Version 2.1-3 00:09:13.118 http://cunit.sourceforge.net/ 00:09:13.118 00:09:13.118 00:09:13.118 Suite: sock 00:09:13.118 Test: posix_sock ...passed 00:09:13.118 Test: ut_sock ...passed 00:09:13.118 Test: posix_sock_group ...passed 00:09:13.118 Test: ut_sock_group ...passed 00:09:13.118 Test: posix_sock_group_fairness ...passed 00:09:13.118 Test: _posix_sock_close ...passed 00:09:13.118 Test: sock_get_default_opts ...passed 00:09:13.118 Test: ut_sock_impl_get_set_opts ...passed 00:09:13.118 Test: posix_sock_impl_get_set_opts ...passed 00:09:13.118 Test: ut_sock_map ...passed 00:09:13.118 Test: override_impl_opts ...passed 00:09:13.118 Test: ut_sock_group_get_ctx ...passed 00:09:13.118 00:09:13.118 Run Summary: Type Total Ran Passed Failed Inactive 00:09:13.118 suites 1 1 n/a 0 0 00:09:13.118 tests 12 12 12 0 0 00:09:13.118 asserts 349 349 349 0 n/a 00:09:13.118 00:09:13.118 Elapsed time = 0.007 seconds 00:09:13.118 12:53:17 -- unit/unittest.sh@124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut 00:09:13.118 00:09:13.118 00:09:13.118 CUnit - A unit testing framework for C - Version 2.1-3 00:09:13.118 http://cunit.sourceforge.net/ 00:09:13.118 00:09:13.118 00:09:13.118 Suite: posix 00:09:13.118 Test: flush ...passed 00:09:13.118 00:09:13.118 Run Summary: Type Total Ran Passed Failed Inactive 00:09:13.118 suites 1 1 n/a 0 0 00:09:13.118 tests 1 1 1 0 0 00:09:13.118 asserts 28 28 28 0 n/a 00:09:13.118 00:09:13.118 Elapsed time = 0.000 seconds 00:09:13.118 12:53:17 -- unit/unittest.sh@126 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:13.118 00:09:13.118 real 0m0.110s 00:09:13.118 user 0m0.048s 00:09:13.118 sys 0m0.035s 00:09:13.118 12:53:17 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:09:13.118 12:53:17 -- common/autotest_common.sh@10 -- # set +x 00:09:13.118 ************************************ 00:09:13.118 END TEST unittest_sock 00:09:13.118 ************************************ 00:09:13.118 12:53:17 -- unit/unittest.sh@279 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:09:13.119 12:53:17 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:09:13.119 12:53:17 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:13.119 12:53:17 -- common/autotest_common.sh@10 -- # set +x 00:09:13.119 ************************************ 00:09:13.119 START TEST unittest_thread 00:09:13.119 ************************************ 00:09:13.119 12:53:17 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:09:13.377 00:09:13.377 00:09:13.377 CUnit - A unit testing framework for C - Version 2.1-3 00:09:13.377 http://cunit.sourceforge.net/ 00:09:13.377 00:09:13.377 00:09:13.377 Suite: io_channel 00:09:13.377 Test: thread_alloc ...passed 00:09:13.377 Test: thread_send_msg ...passed 
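The io_channel suite starting above (thread_alloc, thread_send_msg, ...) exercises SPDK's message-passing threading model: work is handed to a thread by queueing a function pointer plus a context, rather than by locking shared state. A minimal sketch of that pattern, assuming the public spdk/thread.h API (the callback and variable names are ours; error handling and teardown are omitted):

    #include <stdio.h>
    #include <stdbool.h>
    #include "spdk/thread.h"

    /* Runs later, on the thread the message was sent to. */
    static void
    ut_msg_fn(void *ctx)
    {
        *(bool *)ctx = true;
    }

    int
    main(void)
    {
        bool done = false;

        spdk_thread_lib_init(NULL, 0);
        struct spdk_thread *thread = spdk_thread_create("ut_thread", NULL);

        spdk_set_thread(thread);
        spdk_thread_send_msg(thread, ut_msg_fn, &done);

        /* Messages are only delivered when the thread is polled. */
        while (!done) {
            spdk_thread_poll(thread, 0, 0);
        }
        printf("message delivered\n");
        return 0;
    }

The same mechanism underlies the for_each_channel/for_each_thread tests that follow: each is an iteration driven by messages delivered on poll.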
00:09:13.377 Test: thread_poller ...passed 00:09:13.377 Test: poller_pause ...passed 00:09:13.377 Test: thread_for_each ...passed 00:09:13.377 Test: for_each_channel_remove ...passed 00:09:13.377 Test: for_each_channel_unreg ...[2024-04-17 12:53:17.295752] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x7fff214cd6b0 already registered (old:0x613000000200 new:0x6130000003c0) 00:09:13.377 passed 00:09:13.377 Test: thread_name ...passed 00:09:13.377 Test: channel ...[2024-04-17 12:53:17.300305] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2297:spdk_get_io_channel: *ERROR*: could not find io_device 0x556eccb6e300 00:09:13.377 passed 00:09:13.377 Test: channel_destroy_races ...passed 00:09:13.377 Test: thread_exit_test ...[2024-04-17 12:53:17.305903] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 629:thread_exit: *ERROR*: thread 0x618000005c80 got timeout, and move it to the exited state forcefully 00:09:13.377 passed 00:09:13.377 Test: thread_update_stats_test ...passed 00:09:13.377 Test: nested_channel ...passed 00:09:13.377 Test: device_unregister_and_thread_exit_race ...passed 00:09:13.377 Test: cache_closest_timed_poller ...passed 00:09:13.377 Test: multi_timed_pollers_have_same_expiration ...passed 00:09:13.377 Test: io_device_lookup ...passed 00:09:13.377 Test: spdk_spin ...[2024-04-17 12:53:17.318109] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3061:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:09:13.377 [2024-04-17 12:53:17.318254] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7fff214cd6a0 00:09:13.377 [2024-04-17 12:53:17.318377] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3099:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:09:13.377 [2024-04-17 12:53:17.320124] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:09:13.377 [2024-04-17 12:53:17.320305] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7fff214cd6a0 00:09:13.378 [2024-04-17 12:53:17.320451] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:09:13.378 [2024-04-17 12:53:17.320579] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7fff214cd6a0 00:09:13.378 [2024-04-17 12:53:17.320716] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:09:13.378 [2024-04-17 12:53:17.320909] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7fff214cd6a0 00:09:13.378 [2024-04-17 12:53:17.321070] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3043:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:09:13.378 [2024-04-17 12:53:17.321247] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7fff214cd6a0 00:09:13.378 passed 00:09:13.378 Test: for_each_channel_and_thread_exit_race ...passed 00:09:13.378 Test: for_each_thread_and_thread_exit_race ...passed 00:09:13.378 00:09:13.378 Run Summary: Type Total Ran Passed Failed Inactive 00:09:13.378 
suites 1 1 n/a 0 0 00:09:13.378 tests 20 20 20 0 0 00:09:13.378 asserts 409 409 409 0 n/a 00:09:13.378 00:09:13.378 Elapsed time = 0.050 seconds 00:09:13.378 ************************************ 00:09:13.378 END TEST unittest_thread 00:09:13.378 ************************************ 00:09:13.378 00:09:13.378 real 0m0.092s 00:09:13.378 user 0m0.058s 00:09:13.378 sys 0m0.028s 00:09:13.378 12:53:17 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:09:13.378 12:53:17 -- common/autotest_common.sh@10 -- # set +x 00:09:13.378 12:53:17 -- unit/unittest.sh@280 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:09:13.378 12:53:17 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:09:13.378 12:53:17 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:13.378 12:53:17 -- common/autotest_common.sh@10 -- # set +x 00:09:13.378 ************************************ 00:09:13.378 START TEST unittest_iobuf 00:09:13.378 ************************************ 00:09:13.378 12:53:17 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:09:13.378 00:09:13.378 00:09:13.378 CUnit - A unit testing framework for C - Version 2.1-3 00:09:13.378 http://cunit.sourceforge.net/ 00:09:13.378 00:09:13.378 00:09:13.378 Suite: io_channel 00:09:13.378 Test: iobuf ...passed 00:09:13.378 Test: iobuf_cache ...[2024-04-17 12:53:17.435096] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 311:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf small buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:09:13.378 [2024-04-17 12:53:17.435456] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:09:13.378 [2024-04-17 12:53:17.435757] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 323:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf large buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:09:13.378 [2024-04-17 12:53:17.435985] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 326:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:09:13.378 [2024-04-17 12:53:17.436198] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 311:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module1' iobuf small buffer cache at 0/4 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:09:13.378 [2024-04-17 12:53:17.436350] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 
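The iobuf_cache errors above are the expected negative cases: a channel requests a per-channel cache larger than the shared small/large pools can supply, and spdk_iobuf_channel_init reports how far population got (e.g. 4/5 entries). The remedy the log points at is sizing the global pools before initialization. A hedged sketch, assuming the spdk_iobuf_opts API from spdk/thread.h (the get/set signatures and fields have shifted between SPDK releases, so treat this as illustrative; the pool counts are arbitrary):

    #include "spdk/thread.h"

    static void
    configure_iobuf(void)
    {
        struct spdk_iobuf_opts opts;

        /* Start from the defaults, then grow the shared pools so every
         * channel's requested cache can actually be populated.  Newer
         * releases add a size argument to spdk_iobuf_get_opts(). */
        spdk_iobuf_get_opts(&opts);
        opts.small_pool_count = 8192;  /* >= sum of per-channel small caches */
        opts.large_pool_count = 1024;  /* >= sum of per-channel large caches */
        spdk_iobuf_set_opts(&opts);
    }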
00:09:13.378 passed 00:09:13.378 00:09:13.378 Run Summary: Type Total Ran Passed Failed Inactive 00:09:13.378 suites 1 1 n/a 0 0 00:09:13.378 tests 2 2 2 0 0 00:09:13.378 asserts 107 107 107 0 n/a 00:09:13.378 00:09:13.378 Elapsed time = 0.006 seconds 00:09:13.378 00:09:13.378 real 0m0.039s 00:09:13.378 user 0m0.027s 00:09:13.378 sys 0m0.011s 00:09:13.378 12:53:17 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:09:13.378 12:53:17 -- common/autotest_common.sh@10 -- # set +x 00:09:13.378 ************************************ 00:09:13.378 END TEST unittest_iobuf 00:09:13.378 ************************************ 00:09:13.378 12:53:17 -- unit/unittest.sh@281 -- # run_test unittest_util unittest_util 00:09:13.378 12:53:17 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:09:13.378 12:53:17 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:13.378 12:53:17 -- common/autotest_common.sh@10 -- # set +x 00:09:13.637 ************************************ 00:09:13.637 START TEST unittest_util 00:09:13.637 ************************************ 00:09:13.637 12:53:17 -- common/autotest_common.sh@1099 -- # unittest_util 00:09:13.637 12:53:17 -- unit/unittest.sh@132 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:09:13.637 00:09:13.637 00:09:13.637 CUnit - A unit testing framework for C - Version 2.1-3 00:09:13.637 http://cunit.sourceforge.net/ 00:09:13.637 00:09:13.637 00:09:13.637 Suite: base64 00:09:13.637 Test: test_base64_get_encoded_strlen ...passed 00:09:13.637 Test: test_base64_get_decoded_len ...passed 00:09:13.637 Test: test_base64_encode ...passed 00:09:13.637 Test: test_base64_decode ...passed 00:09:13.637 Test: test_base64_urlsafe_encode ...passed 00:09:13.637 Test: test_base64_urlsafe_decode ...passed 00:09:13.637 00:09:13.637 Run Summary: Type Total Ran Passed Failed Inactive 00:09:13.637 suites 1 1 n/a 0 0 00:09:13.637 tests 6 6 6 0 0 00:09:13.637 asserts 112 112 112 0 n/a 00:09:13.637 00:09:13.637 Elapsed time = 0.000 seconds 00:09:13.637 12:53:17 -- unit/unittest.sh@133 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:09:13.637 00:09:13.637 00:09:13.637 CUnit - A unit testing framework for C - Version 2.1-3 00:09:13.637 http://cunit.sourceforge.net/ 00:09:13.637 00:09:13.637 00:09:13.637 Suite: bit_array 00:09:13.637 Test: test_1bit ...passed 00:09:13.637 Test: test_64bit ...passed 00:09:13.637 Test: test_find ...passed 00:09:13.637 Test: test_resize ...passed 00:09:13.637 Test: test_errors ...passed 00:09:13.637 Test: test_count ...passed 00:09:13.637 Test: test_mask_store_load ...passed 00:09:13.637 Test: test_mask_clear ...passed 00:09:13.637 00:09:13.637 Run Summary: Type Total Ran Passed Failed Inactive 00:09:13.637 suites 1 1 n/a 0 0 00:09:13.637 tests 8 8 8 0 0 00:09:13.637 asserts 5075 5075 5075 0 n/a 00:09:13.637 00:09:13.637 Elapsed time = 0.001 seconds 00:09:13.637 12:53:17 -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:09:13.637 00:09:13.637 00:09:13.637 CUnit - A unit testing framework for C - Version 2.1-3 00:09:13.637 http://cunit.sourceforge.net/ 00:09:13.637 00:09:13.637 00:09:13.637 Suite: cpuset 00:09:13.637 Test: test_cpuset ...passed 00:09:13.637 Test: test_cpuset_parse ...[2024-04-17 12:53:17.600428] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '[' 00:09:13.637 [2024-04-17 12:53:17.600808] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list 
'[]' failed on character ']' 00:09:13.637 [2024-04-17 12:53:17.600972] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:09:13.637 [2024-04-17 12:53:17.601130] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 219:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:09:13.637 [2024-04-17 12:53:17.601235] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:09:13.637 [2024-04-17 12:53:17.601351] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:09:13.637 [2024-04-17 12:53:17.601401] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:09:13.637 [2024-04-17 12:53:17.601526] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:09:13.637 passed 00:09:13.637 Test: test_cpuset_fmt ...passed 00:09:13.637 00:09:13.637 Run Summary: Type Total Ran Passed Failed Inactive 00:09:13.637 suites 1 1 n/a 0 0 00:09:13.637 tests 3 3 3 0 0 00:09:13.637 asserts 65 65 65 0 n/a 00:09:13.637 00:09:13.637 Elapsed time = 0.002 seconds 00:09:13.637 12:53:17 -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:09:13.637 00:09:13.637 00:09:13.637 CUnit - A unit testing framework for C - Version 2.1-3 00:09:13.637 http://cunit.sourceforge.net/ 00:09:13.637 00:09:13.637 00:09:13.637 Suite: crc16 00:09:13.637 Test: test_crc16_t10dif ...passed 00:09:13.637 Test: test_crc16_t10dif_seed ...passed 00:09:13.637 Test: test_crc16_t10dif_copy ...passed 00:09:13.637 00:09:13.637 Run Summary: Type Total Ran Passed Failed Inactive 00:09:13.637 suites 1 1 n/a 0 0 00:09:13.637 tests 3 3 3 0 0 00:09:13.637 asserts 5 5 5 0 n/a 00:09:13.637 00:09:13.637 Elapsed time = 0.000 seconds 00:09:13.637 12:53:17 -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:09:13.637 00:09:13.637 00:09:13.637 CUnit - A unit testing framework for C - Version 2.1-3 00:09:13.637 http://cunit.sourceforge.net/ 00:09:13.637 00:09:13.637 00:09:13.637 Suite: crc32_ieee 00:09:13.637 Test: test_crc32_ieee ...passed 00:09:13.637 00:09:13.637 Run Summary: Type Total Ran Passed Failed Inactive 00:09:13.637 suites 1 1 n/a 0 0 00:09:13.637 tests 1 1 1 0 0 00:09:13.637 asserts 1 1 1 0 n/a 00:09:13.637 00:09:13.637 Elapsed time = 0.000 seconds 00:09:13.637 12:53:17 -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:09:13.637 00:09:13.637 00:09:13.637 CUnit - A unit testing framework for C - Version 2.1-3 00:09:13.637 http://cunit.sourceforge.net/ 00:09:13.637 00:09:13.637 00:09:13.637 Suite: crc32c 00:09:13.637 Test: test_crc32c ...passed 00:09:13.637 Test: test_crc32c_nvme ...passed 00:09:13.637 00:09:13.637 Run Summary: Type Total Ran Passed Failed Inactive 00:09:13.637 suites 1 1 n/a 0 0 00:09:13.637 tests 2 2 2 0 0 00:09:13.637 asserts 16 16 16 0 n/a 00:09:13.637 00:09:13.637 Elapsed time = 0.001 seconds 00:09:13.637 12:53:17 -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:09:13.637 00:09:13.637 00:09:13.637 CUnit - A unit testing framework for C - Version 2.1-3 00:09:13.637 http://cunit.sourceforge.net/ 00:09:13.637 00:09:13.637 00:09:13.637 Suite: crc64 00:09:13.637 Test: test_crc64_nvme 
...passed 00:09:13.637 00:09:13.637 Run Summary: Type Total Ran Passed Failed Inactive 00:09:13.637 suites 1 1 n/a 0 0 00:09:13.637 tests 1 1 1 0 0 00:09:13.637 asserts 4 4 4 0 n/a 00:09:13.637 00:09:13.637 Elapsed time = 0.001 seconds 00:09:13.637 12:53:17 -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:09:13.637 00:09:13.637 00:09:13.637 CUnit - A unit testing framework for C - Version 2.1-3 00:09:13.637 http://cunit.sourceforge.net/ 00:09:13.637 00:09:13.637 00:09:13.637 Suite: string 00:09:13.637 Test: test_parse_ip_addr ...passed 00:09:13.637 Test: test_str_chomp ...passed 00:09:13.637 Test: test_parse_capacity ...passed 00:09:13.637 Test: test_sprintf_append_realloc ...passed 00:09:13.637 Test: test_strtol ...passed 00:09:13.637 Test: test_strtoll ...passed 00:09:13.637 Test: test_strarray ...passed 00:09:13.637 Test: test_strcpy_replace ...passed 00:09:13.637 00:09:13.637 Run Summary: Type Total Ran Passed Failed Inactive 00:09:13.637 suites 1 1 n/a 0 0 00:09:13.637 tests 8 8 8 0 0 00:09:13.637 asserts 161 161 161 0 n/a 00:09:13.637 00:09:13.637 Elapsed time = 0.001 seconds 00:09:13.637 12:53:17 -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:09:13.637 00:09:13.637 00:09:13.637 CUnit - A unit testing framework for C - Version 2.1-3 00:09:13.637 http://cunit.sourceforge.net/ 00:09:13.637 00:09:13.637 00:09:13.637 Suite: dif 00:09:13.637 Test: dif_generate_and_verify_test ...[2024-04-17 12:53:17.778358] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:09:13.637 [2024-04-17 12:53:17.779028] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:09:13.637 [2024-04-17 12:53:17.779531] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:09:13.898 [2024-04-17 12:53:17.780023] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:09:13.898 [2024-04-17 12:53:17.780498] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:09:13.898 [2024-04-17 12:53:17.780947] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:09:13.898 passed 00:09:13.898 Test: dif_disable_check_test ...[2024-04-17 12:53:17.782330] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:09:13.898 [2024-04-17 12:53:17.782774] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:09:13.898 [2024-04-17 12:53:17.783187] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:09:13.898 passed 00:09:13.898 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-04-17 12:53:17.784672] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:09:13.898 [2024-04-17 12:53:17.785147] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:09:13.898 
[2024-04-17 12:53:17.785607] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:09:13.898 [2024-04-17 12:53:17.786101] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:09:13.898 [2024-04-17 12:53:17.786573] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:09:13.898 [2024-04-17 12:53:17.787023] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:09:13.898 [2024-04-17 12:53:17.787484] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:09:13.898 [2024-04-17 12:53:17.787941] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:09:13.898 [2024-04-17 12:53:17.788391] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:09:13.898 [2024-04-17 12:53:17.788869] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:09:13.898 [2024-04-17 12:53:17.789336] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:09:13.898 passed 00:09:13.898 Test: dif_apptag_mask_test ...[2024-04-17 12:53:17.789978] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:09:13.898 [2024-04-17 12:53:17.790408] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:09:13.898 passed 00:09:13.898 Test: dif_sec_512_md_0_error_test ...[2024-04-17 12:53:17.790961] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:09:13.898 passed 00:09:13.898 Test: dif_sec_4096_md_0_error_test ...[2024-04-17 12:53:17.791314] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:09:13.898 [2024-04-17 12:53:17.791459] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
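The *_md_0_error cases logged just above confirm that spdk_dif_ctx_init rejects a metadata area smaller than the 8-byte DIF tuple. A sketch of a valid context for 512-byte blocks with interleaved 8-byte metadata, using the long-standing parameter order (recent releases append an options struct; the DIF type and flag constants are real SPDK identifiers, the literal tag values are arbitrary):

    #include "spdk/dif.h"

    /* 512B of data followed by 8B of interleaved metadata per block.
     * md_size must be >= sizeof(struct spdk_dif) (8B), or init fails with
     * the "Metadata size is smaller than DIF size" error seen above. */
    static int
    make_dif_ctx(struct spdk_dif_ctx *ctx)
    {
        return spdk_dif_ctx_init(ctx, 512 + 8, 8,
                                 true,   /* metadata interleaved with data */
                                 false,  /* DIF in the last 8B of metadata */
                                 SPDK_DIF_TYPE1,
                                 SPDK_DIF_FLAGS_GUARD_CHECK |
                                 SPDK_DIF_FLAGS_APPTAG_CHECK |
                                 SPDK_DIF_FLAGS_REFTAG_CHECK,
                                 0x17,            /* initial ref tag */
                                 0xffff, 0x1234,  /* app tag mask / value */
                                 0, 0);           /* data offset, guard seed */
    }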
00:09:13.898 passed 00:09:13.898 Test: dif_sec_4100_md_128_error_test ...[2024-04-17 12:53:17.791904] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:09:13.898 [2024-04-17 12:53:17.792051] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:09:13.898 passed 00:09:13.898 Test: dif_guard_seed_test ...passed 00:09:13.898 Test: dif_guard_value_test ...passed 00:09:13.898 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:09:13.898 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:09:13.898 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:09:13.898 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:09:13.898 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:09:13.898 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:09:13.898 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:09:13.898 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:09:13.898 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:09:13.898 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:09:13.898 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:09:13.898 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:09:13.898 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:09:13.898 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:09:13.898 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:09:13.898 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:09:13.898 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:09:13.898 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:09:13.898 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-04-17 12:53:17.840149] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=7d4c, Actual=fd4c 00:09:13.898 [2024-04-17 12:53:17.842709] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=7e21, Actual=fe21 00:09:13.898 [2024-04-17 12:53:17.845303] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=8088 00:09:13.898 [2024-04-17 12:53:17.847886] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=8088 00:09:13.898 [2024-04-17 12:53:17.850471] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=8000005a 00:09:13.898 [2024-04-17 12:53:17.853065] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=8000005a 00:09:13.898 [2024-04-17 12:53:17.855619] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fd4c, Actual=f049 00:09:13.898 [2024-04-17 12:53:17.857186] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fe21, Actual=d6f1 00:09:13.898 [2024-04-17 12:53:17.858736] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=9ab753ed, Actual=1ab753ed 00:09:13.898 [2024-04-17 12:53:17.861317] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=b8574660, Actual=38574660 00:09:13.898 [2024-04-17 12:53:17.863919] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=8088 00:09:13.898 [2024-04-17 12:53:17.866471] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=8088 00:09:13.898 [2024-04-17 12:53:17.869058] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=8000005a 00:09:13.898 [2024-04-17 12:53:17.871611] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=8000005a 00:09:13.899 [2024-04-17 12:53:17.874190] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=1ab753ed, Actual=1f2fb48d 00:09:13.899 [2024-04-17 12:53:17.875733] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=38574660, Actual=91b46f6b 00:09:13.899 [2024-04-17 12:53:17.877337] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a576a7720ecc20d3, Actual=a576a7728ecc20d3 00:09:13.899 [2024-04-17 12:53:17.879938] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=88010a2dc837a266, Actual=88010a2d4837a266 00:09:13.899 [2024-04-17 12:53:17.882491] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=8088 00:09:13.899 [2024-04-17 12:53:17.885089] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=8088 00:09:13.899 [2024-04-17 12:53:17.887657] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=805a 00:09:13.899 [2024-04-17 12:53:17.890231] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=805a 00:09:13.899 [2024-04-17 12:53:17.892833] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a576a7728ecc20d3, Actual=669f53096654e685 00:09:13.899 [2024-04-17 12:53:17.894382] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=88010a2d4837a266, Actual=9643cc0d8b9c5529 00:09:13.899 passed 00:09:13.899 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-04-17 12:53:17.895309] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=7d4c, Actual=fd4c 00:09:13.899 [2024-04-17 12:53:17.895747] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=7e21, Actual=fe21 00:09:13.899 [2024-04-17 12:53:17.896205] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:09:13.899 [2024-04-17 12:53:17.896641] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:09:13.899 [2024-04-17 12:53:17.897113] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058 00:09:13.899 [2024-04-17 12:53:17.897536] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058 00:09:13.899 [2024-04-17 12:53:17.897974] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=f049 00:09:13.899 [2024-04-17 12:53:17.898315] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=d6f1 00:09:13.899 [2024-04-17 12:53:17.898682] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=9ab753ed, Actual=1ab753ed 00:09:13.899 [2024-04-17 12:53:17.899114] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=b8574660, Actual=38574660 00:09:13.899 [2024-04-17 12:53:17.899565] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:09:13.899 [2024-04-17 12:53:17.900013] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:09:13.899 [2024-04-17 12:53:17.900449] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058 00:09:13.899 [2024-04-17 12:53:17.900892] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058 00:09:13.899 [2024-04-17 12:53:17.901331] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=1f2fb48d 00:09:13.899 [2024-04-17 12:53:17.901675] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=91b46f6b 00:09:13.899 [2024-04-17 12:53:17.902061] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7720ecc20d3, Actual=a576a7728ecc20d3 00:09:13.899 [2024-04-17 12:53:17.902487] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2dc837a266, Actual=88010a2d4837a266 00:09:13.899 [2024-04-17 12:53:17.902927] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:09:13.899 [2024-04-17 12:53:17.903355] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:09:13.899 [2024-04-17 12:53:17.903830] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:09:13.899 [2024-04-17 12:53:17.904269] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:09:13.899 [2024-04-17 12:53:17.904733] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, 
Actual=669f53096654e685 00:09:13.899 [2024-04-17 12:53:17.905100] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=9643cc0d8b9c5529 00:09:13.899 passed 00:09:13.899 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-04-17 12:53:17.905700] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=7d4c, Actual=fd4c 00:09:13.899 [2024-04-17 12:53:17.906144] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=7e21, Actual=fe21 00:09:13.899 [2024-04-17 12:53:17.906576] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:09:13.899 [2024-04-17 12:53:17.907003] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:09:13.899 [2024-04-17 12:53:17.907452] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058 00:09:13.899 [2024-04-17 12:53:17.907913] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058 00:09:13.899 [2024-04-17 12:53:17.908343] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=f049 00:09:13.899 [2024-04-17 12:53:17.908705] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=d6f1 00:09:13.899 [2024-04-17 12:53:17.909053] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=9ab753ed, Actual=1ab753ed 00:09:13.899 [2024-04-17 12:53:17.909494] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=b8574660, Actual=38574660 00:09:13.899 [2024-04-17 12:53:17.909928] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:09:13.899 [2024-04-17 12:53:17.910353] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:09:13.899 [2024-04-17 12:53:17.910785] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058 00:09:13.899 [2024-04-17 12:53:17.911209] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058 00:09:13.899 [2024-04-17 12:53:17.911636] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=1f2fb48d 00:09:13.899 [2024-04-17 12:53:17.912004] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=91b46f6b 00:09:13.899 [2024-04-17 12:53:17.912378] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7720ecc20d3, Actual=a576a7728ecc20d3 00:09:13.899 [2024-04-17 12:53:17.912817] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2dc837a266, Actual=88010a2d4837a266 
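Every Guard/App Tag/Ref Tag mismatch logged in this stretch is injected deliberately: the test corrupts a field and then checks that verification pinpoints it. The round trip the suite is built around looks roughly like this, assuming a context prepared as in the previous sketch (function and buffer names are ours):

    #include <sys/uio.h>
    #include "spdk/dif.h"

    /* Generate tags for one 512+8B interleaved block, corrupt a data byte,
     * and confirm verification catches it -- the pattern behind the logs. */
    static int
    dif_round_trip(const struct spdk_dif_ctx *ctx)
    {
        uint8_t buf[520] = { 0 };
        struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };
        struct spdk_dif_error err;

        spdk_dif_generate(&iov, 1, 1, ctx);  /* fills guard/app/ref tags */
        buf[100] ^= 0xff;                    /* inject a guard error */
        /* Returns nonzero; err.err_type names the failed check,
         * e.g. SPDK_DIF_GUARD_ERROR, matching the lines above. */
        return spdk_dif_verify(&iov, 1, 1, ctx, &err);
    }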
00:09:13.899 [2024-04-17 12:53:17.913256] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:09:13.899 [2024-04-17 12:53:17.913685] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:09:13.899 [2024-04-17 12:53:17.914116] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:09:13.899 [2024-04-17 12:53:17.914534] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:09:13.899 [2024-04-17 12:53:17.914990] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=669f53096654e685 00:09:13.899 [2024-04-17 12:53:17.915334] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=9643cc0d8b9c5529 00:09:13.899 passed 00:09:13.899 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-04-17 12:53:17.915941] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=7d4c, Actual=fd4c 00:09:13.899 [2024-04-17 12:53:17.916400] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=7e21, Actual=fe21 00:09:13.899 [2024-04-17 12:53:17.916867] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:09:13.899 [2024-04-17 12:53:17.917297] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:09:13.899 [2024-04-17 12:53:17.917771] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058 00:09:13.899 [2024-04-17 12:53:17.918199] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058 00:09:13.899 [2024-04-17 12:53:17.918619] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=f049 00:09:13.899 [2024-04-17 12:53:17.918966] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=d6f1 00:09:13.899 [2024-04-17 12:53:17.919339] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=9ab753ed, Actual=1ab753ed 00:09:13.899 [2024-04-17 12:53:17.919767] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=b8574660, Actual=38574660 00:09:13.899 [2024-04-17 12:53:17.920258] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:09:13.899 [2024-04-17 12:53:17.920684] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:09:13.899 [2024-04-17 12:53:17.921135] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058 00:09:13.899 [2024-04-17 12:53:17.921567] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058 00:09:13.899 [2024-04-17 12:53:17.921982] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=1f2fb48d 00:09:13.899 [2024-04-17 12:53:17.922330] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=91b46f6b 00:09:13.900 [2024-04-17 12:53:17.922706] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7720ecc20d3, Actual=a576a7728ecc20d3 00:09:13.900 [2024-04-17 12:53:17.923143] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2dc837a266, Actual=88010a2d4837a266 00:09:13.900 [2024-04-17 12:53:17.923569] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:09:13.900 [2024-04-17 12:53:17.924017] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:09:13.900 [2024-04-17 12:53:17.924455] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:09:13.900 [2024-04-17 12:53:17.924912] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:09:13.900 [2024-04-17 12:53:17.925367] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=669f53096654e685 00:09:13.900 [2024-04-17 12:53:17.925727] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=9643cc0d8b9c5529 00:09:13.900 passed 00:09:13.900 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-04-17 12:53:17.926326] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=7d4c, Actual=fd4c 00:09:13.900 [2024-04-17 12:53:17.926743] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=7e21, Actual=fe21 00:09:13.900 [2024-04-17 12:53:17.927177] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:09:13.900 [2024-04-17 12:53:17.927603] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:09:13.900 [2024-04-17 12:53:17.928065] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058 00:09:13.900 [2024-04-17 12:53:17.928491] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058 00:09:13.900 [2024-04-17 12:53:17.928934] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=f049 00:09:13.900 [2024-04-17 12:53:17.929298] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=d6f1 00:09:13.900 passed 00:09:13.900 Test: 
dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-04-17 12:53:17.929874] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=9ab753ed, Actual=1ab753ed 00:09:13.900 [2024-04-17 12:53:17.930296] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=b8574660, Actual=38574660 00:09:13.900 [2024-04-17 12:53:17.930748] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:09:13.900 [2024-04-17 12:53:17.931179] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:09:13.900 [2024-04-17 12:53:17.931609] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058 00:09:13.900 [2024-04-17 12:53:17.932083] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058 00:09:13.900 [2024-04-17 12:53:17.932532] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=1f2fb48d 00:09:13.900 [2024-04-17 12:53:17.932916] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=91b46f6b 00:09:13.900 [2024-04-17 12:53:17.933323] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7720ecc20d3, Actual=a576a7728ecc20d3 00:09:13.900 [2024-04-17 12:53:17.933764] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2dc837a266, Actual=88010a2d4837a266 00:09:13.900 [2024-04-17 12:53:17.934195] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:09:13.900 [2024-04-17 12:53:17.934627] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:09:13.900 [2024-04-17 12:53:17.935049] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:09:13.900 [2024-04-17 12:53:17.935476] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:09:13.900 [2024-04-17 12:53:17.935945] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=669f53096654e685 00:09:13.900 [2024-04-17 12:53:17.936296] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=9643cc0d8b9c5529 00:09:13.900 passed 00:09:13.900 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-04-17 12:53:17.936882] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=7d4c, Actual=fd4c 00:09:13.900 [2024-04-17 12:53:17.937316] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=7e21, Actual=fe21 00:09:13.900 [2024-04-17 12:53:17.937744] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to 
compare App Tag: LBA=88, Expected=88, Actual=8088 00:09:13.900 [2024-04-17 12:53:17.938168] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:09:13.900 [2024-04-17 12:53:17.938623] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058 00:09:13.900 [2024-04-17 12:53:17.939044] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058 00:09:13.900 [2024-04-17 12:53:17.939475] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=f049 00:09:13.900 [2024-04-17 12:53:17.939839] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=d6f1 00:09:13.900 passed 00:09:13.900 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-04-17 12:53:17.940445] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=9ab753ed, Actual=1ab753ed 00:09:13.900 [2024-04-17 12:53:17.940878] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=b8574660, Actual=38574660 00:09:13.900 [2024-04-17 12:53:17.941349] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:09:13.900 [2024-04-17 12:53:17.941778] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:09:13.900 [2024-04-17 12:53:17.942214] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058 00:09:13.900 [2024-04-17 12:53:17.942636] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058 00:09:13.900 [2024-04-17 12:53:17.943073] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=1f2fb48d 00:09:13.900 [2024-04-17 12:53:17.943433] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=91b46f6b 00:09:13.900 [2024-04-17 12:53:17.943878] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7720ecc20d3, Actual=a576a7728ecc20d3 00:09:13.900 [2024-04-17 12:53:17.944308] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2dc837a266, Actual=88010a2d4837a266 00:09:13.900 [2024-04-17 12:53:17.944756] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:09:13.900 [2024-04-17 12:53:17.945191] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=8088 00:09:13.900 [2024-04-17 12:53:17.945622] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058 00:09:13.900 [2024-04-17 12:53:17.946066] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, 
Actual=8058 00:09:13.900 [2024-04-17 12:53:17.946499] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=669f53096654e685 00:09:13.900 [2024-04-17 12:53:17.946853] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=9643cc0d8b9c5529 00:09:13.900 passed 00:09:13.900 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:09:13.900 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:09:13.900 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:09:13.900 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:09:13.900 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:09:13.900 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:09:13.900 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:09:13.900 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:09:13.900 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:09:13.900 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-04-17 12:53:17.995263] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=7d4c, Actual=fd4c 00:09:13.900 [2024-04-17 12:53:17.997366] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=d218, Actual=5218 00:09:13.900 [2024-04-17 12:53:17.999429] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=8088 00:09:13.900 [2024-04-17 12:53:18.001292] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=8088 00:09:13.900 [2024-04-17 12:53:18.002420] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=8000005a 00:09:13.900 [2024-04-17 12:53:18.003539] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=8000005a 00:09:13.900 [2024-04-17 12:53:18.004671] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=fd4c, Actual=a6e4 00:09:13.900 [2024-04-17 12:53:18.005803] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=55a, Actual=cea8 00:09:13.900 [2024-04-17 12:53:18.006933] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=5ab753ed, Actual=1ab753ed 00:09:13.900 [2024-04-17 12:53:18.008075] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=6af71270, Actual=2af71270 00:09:13.900 [2024-04-17 12:53:18.009228] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=4088 00:09:13.900 [2024-04-17 12:53:18.010383] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=4088 00:09:13.900 [2024-04-17 12:53:18.011512] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=400000000000005f 00:09:13.901 [2024-04-17 
12:53:18.012654] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=400000000000005f 00:09:13.901 [2024-04-17 12:53:18.013794] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=1ab753ed, Actual=26b0c3eb 00:09:13.901 [2024-04-17 12:53:18.014940] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=b80b441a, Actual=2b24fcba 00:09:13.901 [2024-04-17 12:53:18.016076] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=e576a7728ecc20d3, Actual=a576a7728ecc20d3 00:09:13.901 [2024-04-17 12:53:18.017233] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=dfcfa446f6071551, Actual=9fcfa446f6071551 00:09:13.901 [2024-04-17 12:53:18.018368] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=4088 00:09:13.901 [2024-04-17 12:53:18.019496] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=4088 00:09:13.901 [2024-04-17 12:53:18.020631] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=40000000005f 00:09:13.901 [2024-04-17 12:53:18.021771] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=40000000005f 00:09:13.901 [2024-04-17 12:53:18.022903] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=a576a7728ecc20d3, Actual=936844dc56e04803 00:09:13.901 [2024-04-17 12:53:18.024063] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=5d577b185097baaa, Actual=b1b97cc6ebc498b7 00:09:13.901 passed 00:09:13.901 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-04-17 12:53:18.024508] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=bd4c, Actual=fd4c 00:09:13.901 [2024-04-17 12:53:18.024857] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=5291, Actual=1291 00:09:13.901 [2024-04-17 12:53:18.025203] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:09:13.901 [2024-04-17 12:53:18.025540] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:09:13.901 [2024-04-17 12:53:18.025896] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40000059 00:09:13.901 [2024-04-17 12:53:18.026264] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40000059 00:09:13.901 [2024-04-17 12:53:18.026599] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=a6e4 00:09:13.901 [2024-04-17 12:53:18.026940] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=5932 00:09:13.901 
[2024-04-17 12:53:18.027275] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=5ab753ed, Actual=1ab753ed 00:09:13.901 [2024-04-17 12:53:18.027619] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=49410607, Actual=9410607 00:09:13.901 [2024-04-17 12:53:18.027983] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:09:13.901 [2024-04-17 12:53:18.028336] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:09:13.901 [2024-04-17 12:53:18.028677] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=4000000000000059 00:09:13.901 [2024-04-17 12:53:18.029035] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=4000000000000059 00:09:13.901 [2024-04-17 12:53:18.029374] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=26b0c3eb 00:09:13.901 [2024-04-17 12:53:18.029716] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=892e8cd 00:09:13.901 [2024-04-17 12:53:18.030072] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=e576a7728ecc20d3, Actual=a576a7728ecc20d3 00:09:13.901 [2024-04-17 12:53:18.030411] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=cab03f35efd49a94, Actual=8ab03f35efd49a94 00:09:13.901 [2024-04-17 12:53:18.030756] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:09:13.901 [2024-04-17 12:53:18.031094] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:09:13.901 [2024-04-17 12:53:18.031438] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=400000000059 00:09:13.901 [2024-04-17 12:53:18.031774] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=400000000059 00:09:13.901 [2024-04-17 12:53:18.032145] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=936844dc56e04803 00:09:13.901 [2024-04-17 12:53:18.032483] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=a4c6e7b5f2171772 00:09:13.901 passed 00:09:13.901 Test: dix_sec_512_md_0_error ...[2024-04-17 12:53:18.032783] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
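The wall of *ERROR* lines above is expected output: the dif_copy and dix inject tests deliberately corrupt the Guard, App Tag, and Ref Tag fields and then assert that _dif_verify rejects them, while dix_sec_512_md_0_error checks that a context whose metadata area is smaller than 8 bytes cannot hold the protection tuple at all. A minimal, self-contained sketch of that tuple and the checks involved (simplified for illustration only; the SPDK implementation with its optimized CRC lives in lib/util/dif.c):

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <arpa/inet.h>  /* htons/htonl/ntohl */

/* The 8-byte T10 DIF tuple stored in each block's metadata. */
struct dif_tuple {
    uint16_t guard;   /* CRC of the data block, big-endian */
    uint16_t app_tag; /* also checked in the real code, under an apptag mask */
    uint32_t ref_tag; /* typically the LBA for DIF type 1 */
};

/* Table-free CRC16 with the T10-DIF polynomial 0x8BB7. */
static uint16_t crc16_t10dif(const uint8_t *buf, size_t len)
{
    uint16_t crc = 0;
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)buf[i] << 8;
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

/* Mirrors the "Metadata size is smaller than DIF size" check. */
static int dif_ctx_init(uint32_t md_size)
{
    if (md_size < sizeof(struct dif_tuple)) {
        fprintf(stderr, "Metadata size is smaller than DIF size.\n");
        return -1;
    }
    return 0;
}

static int dif_verify(const uint8_t *block, size_t len,
                      const struct dif_tuple *pi, uint32_t lba)
{
    uint16_t guard = htons(crc16_t10dif(block, len));
    if (pi->guard != guard) {
        fprintf(stderr, "Failed to compare Guard: LBA=%u\n", lba);
        return -1;
    }
    if (ntohl(pi->ref_tag) != lba) {
        fprintf(stderr, "Failed to compare Ref Tag: LBA=%u\n", lba);
        return -1;
    }
    return 0;
}

int main(void)
{
    uint8_t block[512] = {0};
    struct dif_tuple pi = {
        .guard = htons(crc16_t10dif(block, sizeof(block))),
        .app_tag = 0,
        .ref_tag = htonl(88),
    };
    printf("md=0 init: %d\n", dif_ctx_init(0));  /* fails, as in the log */
    printf("verify:    %d\n", dif_verify(block, sizeof(block), &pi, 88));
    return 0;
}

Flipping a byte in block[] before the verify call reproduces the "Failed to compare Guard" path the inject tests exercise above.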
00:09:13.901 passed 00:09:13.901 Test: dix_sec_512_md_8_prchk_0_single_iov ...passed 00:09:13.901 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:09:13.901 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:09:14.163 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:09:14.163 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:09:14.163 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:09:14.163 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:09:14.163 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:09:14.163 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:09:14.163 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-04-17 12:53:18.068587] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=bd4c, Actual=fd4c 00:09:14.163 [2024-04-17 12:53:18.069732] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=c50b, Actual=850b 00:09:14.163 [2024-04-17 12:53:18.070862] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=4088 00:09:14.163 [2024-04-17 12:53:18.071992] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=4088 00:09:14.163 [2024-04-17 12:53:18.073139] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=4000005f 00:09:14.163 [2024-04-17 12:53:18.074278] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=4000005f 00:09:14.163 [2024-04-17 12:53:18.075394] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=fd4c, Actual=a6e4 00:09:14.163 [2024-04-17 12:53:18.076553] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=55a, Actual=cea8 00:09:14.163 [2024-04-17 12:53:18.077680] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=5ab753ed, Actual=1ab753ed 00:09:14.163 [2024-04-17 12:53:18.078812] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=6af71270, Actual=2af71270 00:09:14.163 [2024-04-17 12:53:18.079963] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=4088 00:09:14.163 [2024-04-17 12:53:18.081099] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=4088 00:09:14.163 [2024-04-17 12:53:18.082245] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=400000000000005f 00:09:14.163 [2024-04-17 12:53:18.083378] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=400000000000005f 00:09:14.163 [2024-04-17 12:53:18.084510] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=1ab753ed, Actual=26b0c3eb 00:09:14.163 [2024-04-17 12:53:18.085641] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: 
Failed to compare Guard: LBA=95, Expected=b80b441a, Actual=2b24fcba 00:09:14.163 [2024-04-17 12:53:18.086781] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=e576a7728ecc20d3, Actual=a576a7728ecc20d3 00:09:14.163 [2024-04-17 12:53:18.087928] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=dfcfa446f6071551, Actual=9fcfa446f6071551 00:09:14.163 [2024-04-17 12:53:18.089063] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=4088 00:09:14.163 [2024-04-17 12:53:18.090184] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=4088 00:09:14.163 [2024-04-17 12:53:18.091312] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=40000000005f 00:09:14.163 [2024-04-17 12:53:18.092445] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=40000000005f 00:09:14.163 [2024-04-17 12:53:18.093592] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=a576a7728ecc20d3, Actual=936844dc56e04803 00:09:14.163 [2024-04-17 12:53:18.094715] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=5d577b185097baaa, Actual=b1b97cc6ebc498b7 00:09:14.163 passed 00:09:14.163 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-04-17 12:53:18.095186] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=bd4c, Actual=fd4c 00:09:14.163 [2024-04-17 12:53:18.095524] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=5291, Actual=1291 00:09:14.163 [2024-04-17 12:53:18.095880] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:09:14.163 [2024-04-17 12:53:18.096222] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:09:14.163 [2024-04-17 12:53:18.096572] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40000059 00:09:14.163 [2024-04-17 12:53:18.096916] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40000059 00:09:14.163 [2024-04-17 12:53:18.097267] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=a6e4 00:09:14.163 [2024-04-17 12:53:18.097601] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=5932 00:09:14.163 [2024-04-17 12:53:18.097939] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=5ab753ed, Actual=1ab753ed 00:09:14.163 [2024-04-17 12:53:18.098274] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=49410607, Actual=9410607 00:09:14.163 [2024-04-17 12:53:18.098625] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to 
compare App Tag: LBA=89, Expected=88, Actual=4088 00:09:14.163 [2024-04-17 12:53:18.098971] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:09:14.163 [2024-04-17 12:53:18.099307] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=4000000000000059 00:09:14.163 [2024-04-17 12:53:18.099645] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=4000000000000059 00:09:14.163 [2024-04-17 12:53:18.099991] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=26b0c3eb 00:09:14.163 [2024-04-17 12:53:18.100334] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=892e8cd 00:09:14.163 [2024-04-17 12:53:18.100681] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=e576a7728ecc20d3, Actual=a576a7728ecc20d3 00:09:14.163 [2024-04-17 12:53:18.101031] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=cab03f35efd49a94, Actual=8ab03f35efd49a94 00:09:14.163 [2024-04-17 12:53:18.101376] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:09:14.163 [2024-04-17 12:53:18.101712] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=4088 00:09:14.163 [2024-04-17 12:53:18.102042] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=400000000059 00:09:14.163 [2024-04-17 12:53:18.102383] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=400000000059 00:09:14.163 [2024-04-17 12:53:18.102732] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=936844dc56e04803 00:09:14.163 [2024-04-17 12:53:18.103090] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=a4c6e7b5f2171772 00:09:14.163 passed 00:09:14.163 Test: set_md_interleave_iovs_test ...passed 00:09:14.163 Test: set_md_interleave_iovs_split_test ...passed 00:09:14.163 Test: dif_generate_stream_pi_16_test ...passed 00:09:14.163 Test: dif_generate_stream_test ...passed 00:09:14.163 Test: set_md_interleave_iovs_alignment_test ...[2024-04-17 12:53:18.109833] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1822:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
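The set_md_interleave_iovs_alignment_test likewise provokes its failure path on purpose: spdk_dif_set_md_interleave_iovs builds a data-only iovec view over a buffer that interleaves per-block metadata, and it must refuse when the caller's iovec array or buffer cannot cover all blocks ("Buffer overflow will occur"). A simplified sketch of that idea (a hypothetical helper illustrating the concept, not the SPDK signature):

#include <stdint.h>
#include <stdio.h>
#include <sys/uio.h>

/* Build a data-only iovec view over a buffer laid out as
 * [data|md][data|md]..., skipping the metadata regions.
 * Returns the number of iovecs used, or -1 when the caller's
 * array is too small for the requested block count. */
static int set_md_interleave_iovs(struct iovec *iovs, int max_iovs,
                                  uint8_t *buf, uint32_t num_blocks,
                                  uint32_t data_size, uint32_t md_size)
{
    if ((int)num_blocks > max_iovs) {
        fprintf(stderr, "Buffer overflow will occur.\n");
        return -1;
    }
    for (uint32_t i = 0; i < num_blocks; i++) {
        iovs[i].iov_base = buf + (size_t)i * (data_size + md_size);
        iovs[i].iov_len = data_size;   /* skip the trailing metadata */
    }
    return (int)num_blocks;
}

int main(void)
{
    uint8_t buf[4 * (512 + 8)];
    struct iovec iovs[4];
    printf("%d\n", set_md_interleave_iovs(iovs, 4, buf, 4, 512, 8)); /*  4 */
    printf("%d\n", set_md_interleave_iovs(iovs, 2, buf, 4, 512, 8)); /* -1 */
    return 0;
}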
00:09:14.163 passed 00:09:14.163 Test: dif_generate_split_test ...passed 00:09:14.163 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:09:14.163 Test: dif_verify_split_test ...passed 00:09:14.163 Test: dif_verify_stream_multi_segments_test ...passed 00:09:14.163 Test: update_crc32c_pi_16_test ...passed 00:09:14.163 Test: update_crc32c_test ...passed 00:09:14.163 Test: dif_update_crc32c_split_test ...passed 00:09:14.163 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:09:14.163 Test: get_range_with_md_test ...passed 00:09:14.163 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:09:14.163 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:09:14.163 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:09:14.163 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:09:14.163 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:09:14.163 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:09:14.163 Test: dif_generate_and_verify_unmap_test ...passed 00:09:14.163 00:09:14.163 Run Summary: Type Total Ran Passed Failed Inactive 00:09:14.163 suites 1 1 n/a 0 0 00:09:14.163 tests 79 79 79 0 0 00:09:14.163 asserts 3584 3584 3584 0 n/a 00:09:14.163 00:09:14.163 Elapsed time = 0.335 seconds 00:09:14.163 12:53:18 -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:09:14.163 00:09:14.163 00:09:14.163 CUnit - A unit testing framework for C - Version 2.1-3 00:09:14.163 http://cunit.sourceforge.net/ 00:09:14.163 00:09:14.163 00:09:14.163 Suite: iov 00:09:14.163 Test: test_single_iov ...passed 00:09:14.163 Test: test_simple_iov ...passed 00:09:14.163 Test: test_complex_iov ...passed 00:09:14.163 Test: test_iovs_to_buf ...passed 00:09:14.163 Test: test_buf_to_iovs ...passed 00:09:14.163 Test: test_memset ...passed 00:09:14.163 Test: test_iov_one ...passed 00:09:14.163 Test: test_iov_xfer ...passed 00:09:14.163 00:09:14.163 Run Summary: Type Total Ran Passed Failed Inactive 00:09:14.163 suites 1 1 n/a 0 0 00:09:14.163 tests 8 8 8 0 0 00:09:14.163 asserts 156 156 156 0 n/a 00:09:14.163 00:09:14.163 Elapsed time = 0.000 seconds 00:09:14.163 12:53:18 -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:09:14.163 00:09:14.163 00:09:14.163 CUnit - A unit testing framework for C - Version 2.1-3 00:09:14.163 http://cunit.sourceforge.net/ 00:09:14.163 00:09:14.163 00:09:14.163 Suite: math 00:09:14.163 Test: test_serial_number_arithmetic ...passed 00:09:14.163 Suite: erase 00:09:14.164 Test: test_memset_s ...passed 00:09:14.164 00:09:14.164 Run Summary: Type Total Ran Passed Failed Inactive 00:09:14.164 suites 2 2 n/a 0 0 00:09:14.164 tests 2 2 2 0 0 00:09:14.164 asserts 18 18 18 0 n/a 00:09:14.164 00:09:14.164 Elapsed time = 0.000 seconds 00:09:14.164 12:53:18 -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:09:14.164 00:09:14.164 00:09:14.164 CUnit - A unit testing framework for C - Version 2.1-3 00:09:14.164 http://cunit.sourceforge.net/ 00:09:14.164 00:09:14.164 00:09:14.164 Suite: pipe 00:09:14.164 Test: test_create_destroy ...passed 00:09:14.164 Test: test_write_get_buffer ...passed 00:09:14.164 Test: test_write_advance ...passed 00:09:14.164 Test: test_read_get_buffer ...passed 00:09:14.164 Test: test_read_advance ...passed 00:09:14.164 Test: test_data ...passed 00:09:14.164 00:09:14.164 Run Summary: Type Total Ran 
Passed Failed Inactive 00:09:14.164 suites 1 1 n/a 0 0 00:09:14.164 tests 6 6 6 0 0 00:09:14.164 asserts 251 251 251 0 n/a 00:09:14.164 00:09:14.164 Elapsed time = 0.000 seconds 00:09:14.164 12:53:18 -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:09:14.164 00:09:14.164 00:09:14.164 CUnit - A unit testing framework for C - Version 2.1-3 00:09:14.164 http://cunit.sourceforge.net/ 00:09:14.164 00:09:14.164 00:09:14.164 Suite: xor 00:09:14.164 Test: test_xor_gen ...passed 00:09:14.164 00:09:14.164 Run Summary: Type Total Ran Passed Failed Inactive 00:09:14.164 suites 1 1 n/a 0 0 00:09:14.164 tests 1 1 1 0 0 00:09:14.164 asserts 17 17 17 0 n/a 00:09:14.164 00:09:14.164 Elapsed time = 0.005 seconds 00:09:14.164 00:09:14.164 real 0m0.753s 00:09:14.164 user 0m0.562s 00:09:14.164 sys 0m0.146s 00:09:14.164 12:53:18 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:09:14.164 12:53:18 -- common/autotest_common.sh@10 -- # set +x 00:09:14.164 ************************************ 00:09:14.164 END TEST unittest_util 00:09:14.164 ************************************ 00:09:14.423 12:53:18 -- unit/unittest.sh@282 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:14.423 12:53:18 -- unit/unittest.sh@283 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:09:14.423 12:53:18 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:09:14.423 12:53:18 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:14.423 12:53:18 -- common/autotest_common.sh@10 -- # set +x 00:09:14.423 ************************************ 00:09:14.423 START TEST unittest_vhost 00:09:14.423 ************************************ 00:09:14.423 12:53:18 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:09:14.423 00:09:14.423 00:09:14.423 CUnit - A unit testing framework for C - Version 2.1-3 00:09:14.423 http://cunit.sourceforge.net/ 00:09:14.423 00:09:14.423 00:09:14.423 Suite: vhost_suite 00:09:14.423 Test: desc_to_iov_test ...[2024-04-17 12:53:18.369930] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 620:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:09:14.423 passed 00:09:14.423 Test: create_controller_test ...[2024-04-17 12:53:18.375017] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:09:14.423 [2024-04-17 12:53:18.375340] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:09:14.423 [2024-04-17 12:53:18.375759] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:09:14.423 [2024-04-17 12:53:18.376039] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:09:14.423 [2024-04-17 12:53:18.376375] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't register controller with no name 00:09:14.423 [2024-04-17 12:53:18.376662] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1782:vhost_user_dev_init: *ERROR*: Resulting socket path for controller 
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx[2024-04-17 12:53:18.378482] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 133:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:09:14.423 passed 00:09:14.423 Test: session_find_by_vid_test ...passed 00:09:14.423 Test: remove_controller_test ...[2024-04-17 12:53:18.382404] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1867:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:09:14.423 passed 00:09:14.423 Test: vq_avail_ring_get_test ...passed 00:09:14.423 Test: vq_packed_ring_test ...passed 00:09:14.423 Test: vhost_blk_construct_test ...passed 00:09:14.423 00:09:14.423 Run Summary: Type Total Ran Passed Failed Inactive 00:09:14.423 suites 1 1 n/a 0 0 00:09:14.423 tests 7 7 7 0 0 00:09:14.423 asserts 147 147 147 0 n/a 00:09:14.423 00:09:14.423 Elapsed time = 0.015 seconds 00:09:14.423 ************************************ 00:09:14.423 END TEST unittest_vhost 00:09:14.423 ************************************ 00:09:14.423 00:09:14.423 real 0m0.059s 00:09:14.423 user 0m0.035s 00:09:14.423 sys 0m0.020s 00:09:14.423 12:53:18 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:09:14.423 12:53:18 -- common/autotest_common.sh@10 -- # set +x 00:09:14.423 12:53:18 -- unit/unittest.sh@285 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:09:14.423 12:53:18 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:09:14.423 12:53:18 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:14.423 12:53:18 -- common/autotest_common.sh@10 -- # set +x 00:09:14.423 ************************************ 00:09:14.423 START TEST unittest_dma 00:09:14.423 ************************************ 00:09:14.423 12:53:18 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:09:14.423 00:09:14.423 00:09:14.423 CUnit - A unit testing framework for C - Version 2.1-3 00:09:14.423 http://cunit.sourceforge.net/ 00:09:14.423 00:09:14.423 00:09:14.423 Suite: dma_suite 00:09:14.423 Test: test_dma ...[2024-04-17 12:53:18.485041] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 56:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:09:14.423 passed 00:09:14.423 00:09:14.423 Run Summary: Type Total Ran Passed Failed Inactive 00:09:14.423 suites 1 1 n/a 0 0 00:09:14.423 tests 1 1 1 0 0 00:09:14.423 asserts 54 54 54 0 n/a 00:09:14.423 00:09:14.423 Elapsed time = 0.001 seconds 00:09:14.423 ************************************ 00:09:14.423 END TEST unittest_dma 00:09:14.423 
************************************ 00:09:14.423 00:09:14.423 real 0m0.026s 00:09:14.423 user 0m0.013s 00:09:14.423 sys 0m0.012s 00:09:14.423 12:53:18 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:09:14.423 12:53:18 -- common/autotest_common.sh@10 -- # set +x 00:09:14.423 12:53:18 -- unit/unittest.sh@287 -- # run_test unittest_init unittest_init 00:09:14.423 12:53:18 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:09:14.423 12:53:18 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:14.423 12:53:18 -- common/autotest_common.sh@10 -- # set +x 00:09:14.682 ************************************ 00:09:14.682 START TEST unittest_init 00:09:14.682 ************************************ 00:09:14.682 12:53:18 -- common/autotest_common.sh@1099 -- # unittest_init 00:09:14.682 12:53:18 -- unit/unittest.sh@148 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:09:14.682 00:09:14.682 00:09:14.682 CUnit - A unit testing framework for C - Version 2.1-3 00:09:14.682 http://cunit.sourceforge.net/ 00:09:14.682 00:09:14.682 00:09:14.682 Suite: subsystem_suite 00:09:14.682 Test: subsystem_sort_test_depends_on_single ...passed 00:09:14.682 Test: subsystem_sort_test_depends_on_multiple ...passed 00:09:14.682 Test: subsystem_sort_test_missing_dependency ...[2024-04-17 12:53:18.589307] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 190:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:09:14.682 [2024-04-17 12:53:18.589729] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 185:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:09:14.682 passed 00:09:14.682 00:09:14.682 Run Summary: Type Total Ran Passed Failed Inactive 00:09:14.682 suites 1 1 n/a 0 0 00:09:14.682 tests 3 3 3 0 0 00:09:14.682 asserts 20 20 20 0 n/a 00:09:14.682 00:09:14.682 Elapsed time = 0.001 seconds 00:09:14.682 ************************************ 00:09:14.682 END TEST unittest_init 00:09:14.682 ************************************ 00:09:14.682 00:09:14.682 real 0m0.036s 00:09:14.682 user 0m0.024s 00:09:14.682 sys 0m0.008s 00:09:14.682 12:53:18 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:09:14.682 12:53:18 -- common/autotest_common.sh@10 -- # set +x 00:09:14.682 12:53:18 -- unit/unittest.sh@288 -- # run_test unittest_keyring /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:09:14.682 12:53:18 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:09:14.682 12:53:18 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:14.682 12:53:18 -- common/autotest_common.sh@10 -- # set +x 00:09:14.682 ************************************ 00:09:14.682 START TEST unittest_keyring 00:09:14.682 ************************************ 00:09:14.682 12:53:18 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:09:14.682 00:09:14.682 00:09:14.682 CUnit - A unit testing framework for C - Version 2.1-3 00:09:14.682 http://cunit.sourceforge.net/ 00:09:14.682 00:09:14.682 00:09:14.682 Suite: keyring 00:09:14.682 Test: test_keyring_add_remove ...[2024-04-17 12:53:18.685586] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key 'key0' already exists 00:09:14.682 [2024-04-17 12:53:18.685991] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key ':key0' already exists 00:09:14.682 [2024-04-17 12:53:18.686246] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add 
key 'key0' to the keyring 00:09:14.682 passed 00:09:14.682 Test: test_keyring_get_put ...passed 00:09:14.682 00:09:14.682 Run Summary: Type Total Ran Passed Failed Inactive 00:09:14.682 suites 1 1 n/a 0 0 00:09:14.682 tests 2 2 2 0 0 00:09:14.682 asserts 44 44 44 0 n/a 00:09:14.682 00:09:14.682 Elapsed time = 0.001 seconds 00:09:14.682 ************************************ 00:09:14.682 END TEST unittest_keyring 00:09:14.682 ************************************ 00:09:14.682 00:09:14.682 real 0m0.031s 00:09:14.682 user 0m0.008s 00:09:14.682 sys 0m0.022s 00:09:14.683 12:53:18 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:09:14.683 12:53:18 -- common/autotest_common.sh@10 -- # set +x 00:09:14.683 12:53:18 -- unit/unittest.sh@290 -- # '[' yes = yes ']' 00:09:14.683 12:53:18 -- unit/unittest.sh@290 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:09:14.683 12:53:18 -- unit/unittest.sh@291 -- # hostname 00:09:14.683 12:53:18 -- unit/unittest.sh@291 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -d . -c -t ubuntu2004-cloud-1712646987-2220 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:09:14.941 geninfo: WARNING: invalid characters removed from testname! 00:09:47.031 12:53:46 -- unit/unittest.sh@292 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info 00:09:47.598 12:53:51 -- unit/unittest.sh@293 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:50.897 12:53:54 -- unit/unittest.sh@294 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:54.184 12:53:57 -- unit/unittest.sh@295 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:56.717 12:54:00 -- unit/unittest.sh@296 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:10:00.002 12:54:03 -- unit/unittest.sh@297 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:10:02.534 12:54:06 -- unit/unittest.sh@298 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:10:05.065 12:54:08 -- unit/unittest.sh@299 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:10:05.066 12:54:08 -- unit/unittest.sh@300 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:10:05.324 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:10:05.324 Found 317 entries. 00:10:05.324 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:10:05.324 Writing .css and .png files. 00:10:05.324 Generating output. 00:10:05.324 Processing file include/linux/virtio_ring.h 00:10:05.583 Processing file include/spdk/nvme_spec.h 00:10:05.583 Processing file include/spdk/endian.h 00:10:05.583 Processing file include/spdk/histogram_data.h 00:10:05.583 Processing file include/spdk/nvmf_transport.h 00:10:05.583 Processing file include/spdk/base64.h 00:10:05.583 Processing file include/spdk/mmio.h 00:10:05.583 Processing file include/spdk/nvme.h 00:10:05.583 Processing file include/spdk/util.h 00:10:05.583 Processing file include/spdk/bdev_module.h 00:10:05.583 Processing file include/spdk/trace.h 00:10:05.583 Processing file include/spdk/thread.h 00:10:05.842 Processing file include/spdk_internal/nvme_tcp.h 00:10:05.842 Processing file include/spdk_internal/sock.h 00:10:05.842 Processing file include/spdk_internal/rdma.h 00:10:05.842 Processing file include/spdk_internal/virtio.h 00:10:05.842 Processing file include/spdk_internal/utf.h 00:10:05.842 Processing file include/spdk_internal/sgl.h 00:10:06.101 Processing file lib/accel/accel_sw.c 00:10:06.101 Processing file lib/accel/accel_rpc.c 00:10:06.101 Processing file lib/accel/accel.c 00:10:06.360 Processing file lib/bdev/bdev_zone.c 00:10:06.360 Processing file lib/bdev/part.c 00:10:06.360 Processing file lib/bdev/scsi_nvme.c 00:10:06.360 Processing file lib/bdev/bdev.c 00:10:06.360 Processing file lib/bdev/bdev_rpc.c 00:10:06.618 Processing file lib/blob/blob_bs_dev.c 00:10:06.618 Processing file lib/blob/request.c 00:10:06.618 Processing file lib/blob/blobstore.h 00:10:06.618 Processing file lib/blob/zeroes.c 00:10:06.619 Processing file lib/blob/blobstore.c 00:10:06.619 Processing file lib/blobfs/blobfs.c 00:10:06.619 Processing file lib/blobfs/tree.c 00:10:06.877 Processing file lib/conf/conf.c 00:10:06.877 Processing file lib/dma/dma.c 00:10:07.136 Processing file lib/env_dpdk/init.c 00:10:07.136 Processing file lib/env_dpdk/pci_ioat.c 00:10:07.136 Processing file lib/env_dpdk/pci_event.c 00:10:07.136 Processing file lib/env_dpdk/pci_virtio.c 00:10:07.136 Processing file lib/env_dpdk/pci_dpdk.c 00:10:07.136 Processing file 
lib/env_dpdk/pci.c 00:10:07.136 Processing file lib/env_dpdk/pci_vmd.c 00:10:07.136 Processing file lib/env_dpdk/threads.c 00:10:07.136 Processing file lib/env_dpdk/memory.c 00:10:07.136 Processing file lib/env_dpdk/pci_idxd.c 00:10:07.136 Processing file lib/env_dpdk/env.c 00:10:07.136 Processing file lib/env_dpdk/sigbus_handler.c 00:10:07.136 Processing file lib/env_dpdk/pci_dpdk_2207.c 00:10:07.136 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:10:07.136 Processing file lib/event/log_rpc.c 00:10:07.136 Processing file lib/event/app_rpc.c 00:10:07.136 Processing file lib/event/scheduler_static.c 00:10:07.136 Processing file lib/event/app.c 00:10:07.136 Processing file lib/event/reactor.c 00:10:07.703 Processing file lib/ftl/ftl_band.c 00:10:07.703 Processing file lib/ftl/ftl_writer.c 00:10:07.703 Processing file lib/ftl/ftl_writer.h 00:10:07.703 Processing file lib/ftl/ftl_l2p_cache.c 00:10:07.703 Processing file lib/ftl/ftl_core.h 00:10:07.703 Processing file lib/ftl/ftl_io.h 00:10:07.703 Processing file lib/ftl/ftl_band_ops.c 00:10:07.703 Processing file lib/ftl/ftl_p2l.c 00:10:07.704 Processing file lib/ftl/ftl_rq.c 00:10:07.704 Processing file lib/ftl/ftl_l2p.c 00:10:07.704 Processing file lib/ftl/ftl_debug.c 00:10:07.704 Processing file lib/ftl/ftl_init.c 00:10:07.704 Processing file lib/ftl/ftl_nv_cache_io.h 00:10:07.704 Processing file lib/ftl/ftl_io.c 00:10:07.704 Processing file lib/ftl/ftl_nv_cache.h 00:10:07.704 Processing file lib/ftl/ftl_l2p_flat.c 00:10:07.704 Processing file lib/ftl/ftl_sb.c 00:10:07.704 Processing file lib/ftl/ftl_debug.h 00:10:07.704 Processing file lib/ftl/ftl_band.h 00:10:07.704 Processing file lib/ftl/ftl_layout.c 00:10:07.704 Processing file lib/ftl/ftl_trace.c 00:10:07.704 Processing file lib/ftl/ftl_reloc.c 00:10:07.704 Processing file lib/ftl/ftl_nv_cache.c 00:10:07.704 Processing file lib/ftl/ftl_core.c 00:10:07.704 Processing file lib/ftl/base/ftl_base_dev.c 00:10:07.704 Processing file lib/ftl/base/ftl_base_bdev.c 00:10:07.963 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:10:07.963 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:10:07.963 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:10:07.963 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:10:07.963 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:10:07.963 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:10:07.963 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:10:07.963 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:10:07.963 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:10:07.963 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:10:07.963 Processing file lib/ftl/mngt/ftl_mngt.c 00:10:07.963 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:10:07.963 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:10:07.963 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:10:07.963 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:10:08.221 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:10:08.221 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:10:08.221 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:10:08.221 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:10:08.479 Processing file lib/ftl/utils/ftl_df.h 00:10:08.479 Processing file lib/ftl/utils/ftl_property.h 00:10:08.479 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:10:08.479 Processing file lib/ftl/utils/ftl_property.c 00:10:08.479 Processing file lib/ftl/utils/ftl_md.c 00:10:08.479 Processing file lib/ftl/utils/ftl_mempool.c 00:10:08.479 Processing file lib/ftl/utils/ftl_addr_utils.h 00:10:08.479 Processing file 
lib/ftl/utils/ftl_bitmap.c 00:10:08.479 Processing file lib/ftl/utils/ftl_conf.c 00:10:08.479 Processing file lib/idxd/idxd.c 00:10:08.479 Processing file lib/idxd/idxd_internal.h 00:10:08.479 Processing file lib/idxd/idxd_user.c 00:10:08.479 Processing file lib/init/subsystem_rpc.c 00:10:08.479 Processing file lib/init/subsystem.c 00:10:08.479 Processing file lib/init/rpc.c 00:10:08.479 Processing file lib/init/json_config.c 00:10:08.738 Processing file lib/ioat/ioat_internal.h 00:10:08.738 Processing file lib/ioat/ioat.c 00:10:08.997 Processing file lib/iscsi/portal_grp.c 00:10:08.997 Processing file lib/iscsi/task.h 00:10:08.997 Processing file lib/iscsi/iscsi.c 00:10:08.997 Processing file lib/iscsi/iscsi_rpc.c 00:10:08.997 Processing file lib/iscsi/task.c 00:10:08.997 Processing file lib/iscsi/init_grp.c 00:10:08.997 Processing file lib/iscsi/md5.c 00:10:08.997 Processing file lib/iscsi/tgt_node.c 00:10:08.997 Processing file lib/iscsi/param.c 00:10:08.997 Processing file lib/iscsi/iscsi.h 00:10:08.997 Processing file lib/iscsi/conn.c 00:10:08.997 Processing file lib/iscsi/iscsi_subsystem.c 00:10:08.997 Processing file lib/json/json_parse.c 00:10:08.997 Processing file lib/json/json_write.c 00:10:08.997 Processing file lib/json/json_util.c 00:10:09.256 Processing file lib/jsonrpc/jsonrpc_client.c 00:10:09.256 Processing file lib/jsonrpc/jsonrpc_server_tcp.c 00:10:09.256 Processing file lib/jsonrpc/jsonrpc_server.c 00:10:09.256 Processing file lib/jsonrpc/jsonrpc_client_tcp.c 00:10:09.256 Processing file lib/keyring/keyring.c 00:10:09.256 Processing file lib/keyring/keyring_rpc.c 00:10:09.256 Processing file lib/log/log_flags.c 00:10:09.256 Processing file lib/log/log_deprecated.c 00:10:09.256 Processing file lib/log/log.c 00:10:09.256 Processing file lib/lvol/lvol.c 00:10:09.514 Processing file lib/nbd/nbd_rpc.c 00:10:09.514 Processing file lib/nbd/nbd.c 00:10:09.514 Processing file lib/notify/notify.c 00:10:09.514 Processing file lib/notify/notify_rpc.c 00:10:10.111 Processing file lib/nvme/nvme_ctrlr.c 00:10:10.111 Processing file lib/nvme/nvme_pcie_common.c 00:10:10.111 Processing file lib/nvme/nvme_tcp.c 00:10:10.111 Processing file lib/nvme/nvme_transport.c 00:10:10.111 Processing file lib/nvme/nvme_ns.c 00:10:10.111 Processing file lib/nvme/nvme_poll_group.c 00:10:10.111 Processing file lib/nvme/nvme_quirks.c 00:10:10.111 Processing file lib/nvme/nvme_auth.c 00:10:10.111 Processing file lib/nvme/nvme_io_msg.c 00:10:10.111 Processing file lib/nvme/nvme_discovery.c 00:10:10.111 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:10:10.111 Processing file lib/nvme/nvme_zns.c 00:10:10.111 Processing file lib/nvme/nvme_fabric.c 00:10:10.111 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:10:10.111 Processing file lib/nvme/nvme_qpair.c 00:10:10.111 Processing file lib/nvme/nvme_pcie_internal.h 00:10:10.111 Processing file lib/nvme/nvme_ns_cmd.c 00:10:10.111 Processing file lib/nvme/nvme_internal.h 00:10:10.111 Processing file lib/nvme/nvme_stubs.c 00:10:10.111 Processing file lib/nvme/nvme_pcie.c 00:10:10.111 Processing file lib/nvme/nvme_opal.c 00:10:10.111 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:10:10.111 Processing file lib/nvme/nvme_cuse.c 00:10:10.111 Processing file lib/nvme/nvme_rdma.c 00:10:10.111 Processing file lib/nvme/nvme.c 00:10:10.677 Processing file lib/nvmf/transport.c 00:10:10.677 Processing file lib/nvmf/tcp.c 00:10:10.677 Processing file lib/nvmf/ctrlr.c 00:10:10.677 Processing file lib/nvmf/ctrlr_discovery.c 00:10:10.677 Processing file lib/nvmf/ctrlr_bdev.c 
00:10:10.677 Processing file lib/nvmf/nvmf_rpc.c 00:10:10.677 Processing file lib/nvmf/nvmf.c 00:10:10.677 Processing file lib/nvmf/rdma.c 00:10:10.677 Processing file lib/nvmf/subsystem.c 00:10:10.677 Processing file lib/nvmf/nvmf_internal.h 00:10:10.677 Processing file lib/rdma/rdma_verbs.c 00:10:10.677 Processing file lib/rdma/common.c 00:10:10.677 Processing file lib/rpc/rpc.c 00:10:10.935 Processing file lib/scsi/lun.c 00:10:10.935 Processing file lib/scsi/dev.c 00:10:10.935 Processing file lib/scsi/scsi_bdev.c 00:10:10.935 Processing file lib/scsi/task.c 00:10:10.935 Processing file lib/scsi/scsi_pr.c 00:10:10.935 Processing file lib/scsi/scsi_rpc.c 00:10:10.935 Processing file lib/scsi/port.c 00:10:10.935 Processing file lib/scsi/scsi.c 00:10:10.935 Processing file lib/sock/sock.c 00:10:10.935 Processing file lib/sock/sock_rpc.c 00:10:10.935 Processing file lib/thread/thread.c 00:10:10.935 Processing file lib/thread/iobuf.c 00:10:11.194 Processing file lib/trace/trace_flags.c 00:10:11.194 Processing file lib/trace/trace_rpc.c 00:10:11.194 Processing file lib/trace/trace.c 00:10:11.194 Processing file lib/trace_parser/trace.cpp 00:10:11.194 Processing file lib/ut/ut.c 00:10:11.194 Processing file lib/ut_mock/mock.c 00:10:11.761 Processing file lib/util/fd.c 00:10:11.761 Processing file lib/util/crc32c.c 00:10:11.761 Processing file lib/util/strerror_tls.c 00:10:11.761 Processing file lib/util/file.c 00:10:11.761 Processing file lib/util/hexlify.c 00:10:11.761 Processing file lib/util/zipf.c 00:10:11.761 Processing file lib/util/pipe.c 00:10:11.761 Processing file lib/util/crc32.c 00:10:11.761 Processing file lib/util/base64.c 00:10:11.761 Processing file lib/util/iov.c 00:10:11.761 Processing file lib/util/crc64.c 00:10:11.761 Processing file lib/util/uuid.c 00:10:11.761 Processing file lib/util/fd_group.c 00:10:11.761 Processing file lib/util/xor.c 00:10:11.761 Processing file lib/util/cpuset.c 00:10:11.761 Processing file lib/util/string.c 00:10:11.761 Processing file lib/util/math.c 00:10:11.761 Processing file lib/util/crc16.c 00:10:11.761 Processing file lib/util/dif.c 00:10:11.761 Processing file lib/util/bit_array.c 00:10:11.761 Processing file lib/util/crc32_ieee.c 00:10:11.761 Processing file lib/vfio_user/host/vfio_user_pci.c 00:10:11.761 Processing file lib/vfio_user/host/vfio_user.c 00:10:12.020 Processing file lib/vhost/vhost_scsi.c 00:10:12.020 Processing file lib/vhost/vhost.c 00:10:12.020 Processing file lib/vhost/vhost_rpc.c 00:10:12.020 Processing file lib/vhost/vhost_blk.c 00:10:12.020 Processing file lib/vhost/vhost_internal.h 00:10:12.020 Processing file lib/vhost/rte_vhost_user.c 00:10:12.020 Processing file lib/virtio/virtio_vfio_user.c 00:10:12.020 Processing file lib/virtio/virtio_vhost_user.c 00:10:12.020 Processing file lib/virtio/virtio_pci.c 00:10:12.020 Processing file lib/virtio/virtio.c 00:10:12.278 Processing file lib/vmd/vmd.c 00:10:12.278 Processing file lib/vmd/led.c 00:10:12.278 Processing file module/accel/dsa/accel_dsa_rpc.c 00:10:12.278 Processing file module/accel/dsa/accel_dsa.c 00:10:12.278 Processing file module/accel/error/accel_error.c 00:10:12.278 Processing file module/accel/error/accel_error_rpc.c 00:10:12.278 Processing file module/accel/iaa/accel_iaa.c 00:10:12.278 Processing file module/accel/iaa/accel_iaa_rpc.c 00:10:12.537 Processing file module/accel/ioat/accel_ioat.c 00:10:12.537 Processing file module/accel/ioat/accel_ioat_rpc.c 00:10:12.537 Processing file module/bdev/aio/bdev_aio.c 00:10:12.537 Processing file 
module/bdev/aio/bdev_aio_rpc.c 00:10:12.537 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:10:12.537 Processing file module/bdev/delay/vbdev_delay.c 00:10:12.796 Processing file module/bdev/error/vbdev_error_rpc.c 00:10:12.796 Processing file module/bdev/error/vbdev_error.c 00:10:12.796 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:10:12.796 Processing file module/bdev/ftl/bdev_ftl.c 00:10:12.796 Processing file module/bdev/gpt/gpt.h 00:10:12.796 Processing file module/bdev/gpt/vbdev_gpt.c 00:10:12.796 Processing file module/bdev/gpt/gpt.c 00:10:13.101 Processing file module/bdev/iscsi/bdev_iscsi_rpc.c 00:10:13.101 Processing file module/bdev/iscsi/bdev_iscsi.c 00:10:13.101 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:10:13.101 Processing file module/bdev/lvol/vbdev_lvol.c 00:10:13.101 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:10:13.101 Processing file module/bdev/malloc/bdev_malloc.c 00:10:13.101 Processing file module/bdev/null/bdev_null_rpc.c 00:10:13.101 Processing file module/bdev/null/bdev_null.c 00:10:13.669 Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:10:13.669 Processing file module/bdev/nvme/vbdev_opal.c 00:10:13.669 Processing file module/bdev/nvme/nvme_rpc.c 00:10:13.669 Processing file module/bdev/nvme/bdev_nvme.c 00:10:13.669 Processing file module/bdev/nvme/vbdev_opal_rpc.c 00:10:13.669 Processing file module/bdev/nvme/bdev_mdns_client.c 00:10:13.669 Processing file module/bdev/nvme/bdev_nvme_rpc.c 00:10:13.669 Processing file module/bdev/passthru/vbdev_passthru_rpc.c 00:10:13.669 Processing file module/bdev/passthru/vbdev_passthru.c 00:10:13.927 Processing file module/bdev/raid/raid0.c 00:10:13.927 Processing file module/bdev/raid/bdev_raid_sb.c 00:10:13.927 Processing file module/bdev/raid/bdev_raid_rpc.c 00:10:13.927 Processing file module/bdev/raid/bdev_raid.h 00:10:13.927 Processing file module/bdev/raid/raid5f.c 00:10:13.927 Processing file module/bdev/raid/bdev_raid.c 00:10:13.927 Processing file module/bdev/raid/concat.c 00:10:13.927 Processing file module/bdev/raid/raid1.c 00:10:13.927 Processing file module/bdev/split/vbdev_split.c 00:10:13.927 Processing file module/bdev/split/vbdev_split_rpc.c 00:10:14.185 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:10:14.186 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:10:14.186 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:10:14.186 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:10:14.186 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:10:14.186 Processing file module/blob/bdev/blob_bdev.c 00:10:14.444 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:10:14.444 Processing file module/blobfs/bdev/blobfs_bdev.c 00:10:14.444 Processing file module/env_dpdk/env_dpdk_rpc.c 00:10:14.444 Processing file module/event/subsystems/accel/accel.c 00:10:14.444 Processing file module/event/subsystems/bdev/bdev.c 00:10:14.702 Processing file module/event/subsystems/iobuf/iobuf.c 00:10:14.702 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:10:14.702 Processing file module/event/subsystems/iscsi/iscsi.c 00:10:14.702 Processing file module/event/subsystems/keyring/keyring.c 00:10:14.702 Processing file module/event/subsystems/nbd/nbd.c 00:10:14.960 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:10:14.960 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:10:14.960 Processing file module/event/subsystems/scheduler/scheduler.c 00:10:14.960 Processing file module/event/subsystems/scsi/scsi.c 
00:10:15.218 Processing file module/event/subsystems/sock/sock.c 00:10:15.218 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:10:15.218 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:10:15.476 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:10:15.476 Processing file module/event/subsystems/vmd/vmd.c 00:10:15.476 Processing file module/keyring/file/keyring_rpc.c 00:10:15.476 Processing file module/keyring/file/keyring.c 00:10:15.476 Processing file module/keyring/linux/keyring.c 00:10:15.476 Processing file module/keyring/linux/keyring_rpc.c 00:10:15.476 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:10:15.734 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:10:15.734 Processing file module/scheduler/gscheduler/gscheduler.c 00:10:15.734 Processing file module/sock/sock_kernel.h 00:10:15.993 Processing file module/sock/posix/posix.c 00:10:15.993 Writing directory view page. 00:10:15.993 Overall coverage rate: 00:10:15.993 lines......: 39.2% (39860 of 101755 lines) 00:10:15.993 functions..: 42.7% (3647 of 8531 functions) 00:10:15.993 00:10:15.993 00:10:15.993 ===================== 00:10:15.993 All unit tests passed 00:10:15.993 ===================== 00:10:15.993 Note: coverage report is here: /home/vagrant/spdk_repo/spdk//home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:10:15.993 12:54:19 -- unit/unittest.sh@303 -- # set +x 00:10:15.993 00:10:15.993 00:10:15.993 ************************************ 00:10:15.993 END TEST unittest 00:10:15.993 ************************************ 00:10:15.993 00:10:15.993 real 3m26.150s 00:10:15.993 user 2m59.064s 00:10:15.993 sys 0m16.266s 00:10:15.993 12:54:19 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:10:15.994 12:54:19 -- common/autotest_common.sh@10 -- # set +x 00:10:15.994 12:54:19 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:10:15.994 12:54:19 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:10:15.994 12:54:19 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:10:15.994 12:54:19 -- spdk/autotest.sh@162 -- # timing_enter lib 00:10:15.994 12:54:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:15.994 12:54:19 -- common/autotest_common.sh@10 -- # set +x 00:10:15.994 12:54:19 -- spdk/autotest.sh@164 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:10:15.994 12:54:19 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:10:15.994 12:54:19 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:15.994 12:54:19 -- common/autotest_common.sh@10 -- # set +x 00:10:15.994 ************************************ 00:10:15.994 START TEST env 00:10:15.994 ************************************ 00:10:15.994 12:54:19 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:10:15.994 * Looking for test storage... 
00:10:15.994 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:10:15.994 12:54:20 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:10:15.994 12:54:20 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:10:15.994 12:54:20 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:15.994 12:54:20 -- common/autotest_common.sh@10 -- # set +x 00:10:15.994 ************************************ 00:10:15.994 START TEST env_memory 00:10:15.994 ************************************ 00:10:15.994 12:54:20 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:10:15.994 00:10:15.994 00:10:15.994 CUnit - A unit testing framework for C - Version 2.1-3 00:10:15.994 http://cunit.sourceforge.net/ 00:10:15.994 00:10:15.994 00:10:15.994 Suite: memory 00:10:16.263 Test: alloc and free memory map ...[2024-04-17 12:54:20.157011] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:10:16.263 passed 00:10:16.263 Test: mem map translation ...[2024-04-17 12:54:20.203939] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:10:16.263 [2024-04-17 12:54:20.204091] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:10:16.263 [2024-04-17 12:54:20.204209] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:10:16.263 [2024-04-17 12:54:20.204302] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:10:16.263 passed 00:10:16.263 Test: mem map registration ...[2024-04-17 12:54:20.288191] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:10:16.263 [2024-04-17 12:54:20.288331] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:10:16.263 passed 00:10:16.537 Test: mem map adjacent registrations ...passed 00:10:16.537 00:10:16.537 Run Summary: Type Total Ran Passed Failed Inactive 00:10:16.537 suites 1 1 n/a 0 0 00:10:16.537 tests 4 4 4 0 0 00:10:16.537 asserts 152 152 152 0 n/a 00:10:16.537 00:10:16.537 Elapsed time = 0.288 seconds 00:10:16.537 ************************************ 00:10:16.537 END TEST env_memory 00:10:16.537 ************************************ 00:10:16.537 00:10:16.537 real 0m0.315s 00:10:16.537 user 0m0.290s 00:10:16.537 sys 0m0.025s 00:10:16.538 12:54:20 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:10:16.538 12:54:20 -- common/autotest_common.sh@10 -- # set +x 00:10:16.538 12:54:20 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:10:16.538 12:54:20 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:10:16.538 12:54:20 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:16.538 12:54:20 -- common/autotest_common.sh@10 -- # set +x 00:10:16.538 ************************************ 00:10:16.538 START TEST env_vtophys 00:10:16.538 ************************************ 00:10:16.538 12:54:20 -- common/autotest_common.sh@1099 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:10:16.538 EAL: lib.eal log level changed from notice to debug 00:10:16.538 EAL: Detected lcore 0 as core 0 on socket 0 00:10:16.538 EAL: Detected lcore 1 as core 0 on socket 0 00:10:16.538 EAL: Detected lcore 2 as core 0 on socket 0 00:10:16.538 EAL: Detected lcore 3 as core 0 on socket 0 00:10:16.538 EAL: Detected lcore 4 as core 0 on socket 0 00:10:16.538 EAL: Detected lcore 5 as core 0 on socket 0 00:10:16.538 EAL: Detected lcore 6 as core 0 on socket 0 00:10:16.538 EAL: Detected lcore 7 as core 0 on socket 0 00:10:16.538 EAL: Detected lcore 8 as core 0 on socket 0 00:10:16.538 EAL: Detected lcore 9 as core 0 on socket 0 00:10:16.538 EAL: Maximum logical cores by configuration: 128 00:10:16.538 EAL: Detected CPU lcores: 10 00:10:16.538 EAL: Detected NUMA nodes: 1 00:10:16.538 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:10:16.538 EAL: Checking presence of .so 'librte_eal.so.24' 00:10:16.538 EAL: Checking presence of .so 'librte_eal.so' 00:10:16.538 EAL: Detected static linkage of DPDK 00:10:16.538 EAL: No shared files mode enabled, IPC will be disabled 00:10:16.538 EAL: Selected IOVA mode 'PA' 00:10:16.538 EAL: Probing VFIO support... 00:10:16.538 EAL: IOMMU type 1 (Type 1) is supported 00:10:16.538 EAL: IOMMU type 7 (sPAPR) is not supported 00:10:16.538 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:10:16.538 EAL: VFIO support initialized 00:10:16.538 EAL: Ask a virtual area of 0x2e000 bytes 00:10:16.538 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:10:16.538 EAL: Setting up physically contiguous memory... 00:10:16.538 EAL: Setting maximum number of open files to 1048576 00:10:16.538 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:10:16.538 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:10:16.538 EAL: Ask a virtual area of 0x61000 bytes 00:10:16.538 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:10:16.538 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:16.538 EAL: Ask a virtual area of 0x400000000 bytes 00:10:16.538 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:10:16.538 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:10:16.538 EAL: Ask a virtual area of 0x61000 bytes 00:10:16.538 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:10:16.538 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:16.538 EAL: Ask a virtual area of 0x400000000 bytes 00:10:16.538 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:10:16.538 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:10:16.538 EAL: Ask a virtual area of 0x61000 bytes 00:10:16.538 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:10:16.538 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:16.538 EAL: Ask a virtual area of 0x400000000 bytes 00:10:16.538 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:10:16.538 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:10:16.538 EAL: Ask a virtual area of 0x61000 bytes 00:10:16.538 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:10:16.538 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:16.538 EAL: Ask a virtual area of 0x400000000 bytes 00:10:16.538 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:10:16.538 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:10:16.538 EAL: Hugepages will be freed exactly as allocated. 
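The vtophys binary whose EAL bring-up appears above exists to exercise virtual-to-physical translation. Outside of DPDK's IOVA machinery, the same lookup can be done on Linux through /proc/self/pagemap, which holds one 64-bit entry per virtual page (PFN in bits 0-54, "present" flag in bit 63; reading the PFN requires CAP_SYS_ADMIN on recent kernels). A minimal sketch:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

/* Translate a virtual address to a physical one via pagemap. */
static uint64_t vtophys(const void *vaddr)
{
    long pagesize = sysconf(_SC_PAGESIZE);
    uint64_t vfn = (uint64_t)vaddr / (uint64_t)pagesize;
    uint64_t entry = 0;

    int fd = open("/proc/self/pagemap", O_RDONLY);
    if (fd < 0)
        return UINT64_MAX;
    /* One 8-byte entry per virtual page, indexed by page number. */
    if (pread(fd, &entry, sizeof(entry), vfn * sizeof(entry)) !=
        (ssize_t)sizeof(entry)) {
        close(fd);
        return UINT64_MAX;
    }
    close(fd);

    if (!(entry & (1ULL << 63)))          /* page not present */
        return UINT64_MAX;
    uint64_t pfn = entry & ((1ULL << 55) - 1);
    return pfn * (uint64_t)pagesize + (uint64_t)vaddr % (uint64_t)pagesize;
}

int main(void)
{
    int x = 42;
    printf("vaddr=%p paddr=0x%llx\n", (void *)&x,
           (unsigned long long)vtophys(&x));
    return 0;
}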
00:10:16.538 EAL: No shared files mode enabled, IPC is disabled 00:10:16.538 EAL: No shared files mode enabled, IPC is disabled 00:10:16.796 EAL: TSC frequency is ~2200000 KHz 00:10:16.796 EAL: Main lcore 0 is ready (tid=7fde483eaa40;cpuset=[0]) 00:10:16.796 EAL: Trying to obtain current memory policy. 00:10:16.796 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:16.796 EAL: Restoring previous memory policy: 0 00:10:16.796 EAL: request: mp_malloc_sync 00:10:16.796 EAL: No shared files mode enabled, IPC is disabled 00:10:16.796 EAL: Heap on socket 0 was expanded by 2MB 00:10:16.796 EAL: No shared files mode enabled, IPC is disabled 00:10:16.796 EAL: Mem event callback 'spdk:(nil)' registered 00:10:16.796 00:10:16.796 00:10:16.796 CUnit - A unit testing framework for C - Version 2.1-3 00:10:16.797 http://cunit.sourceforge.net/ 00:10:16.797 00:10:16.797 00:10:16.797 Suite: components_suite 00:10:17.055 Test: vtophys_malloc_test ...passed 00:10:17.055 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:10:17.055 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:17.055 EAL: Restoring previous memory policy: 0 00:10:17.055 EAL: Calling mem event callback 'spdk:(nil)' 00:10:17.055 EAL: request: mp_malloc_sync 00:10:17.055 EAL: No shared files mode enabled, IPC is disabled 00:10:17.055 EAL: Heap on socket 0 was expanded by 4MB 00:10:17.055 EAL: Calling mem event callback 'spdk:(nil)' 00:10:17.055 EAL: request: mp_malloc_sync 00:10:17.055 EAL: No shared files mode enabled, IPC is disabled 00:10:17.055 EAL: Heap on socket 0 was shrunk by 4MB 00:10:17.055 EAL: Trying to obtain current memory policy. 00:10:17.055 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:17.055 EAL: Restoring previous memory policy: 0 00:10:17.055 EAL: Calling mem event callback 'spdk:(nil)' 00:10:17.055 EAL: request: mp_malloc_sync 00:10:17.055 EAL: No shared files mode enabled, IPC is disabled 00:10:17.055 EAL: Heap on socket 0 was expanded by 6MB 00:10:17.055 EAL: Calling mem event callback 'spdk:(nil)' 00:10:17.055 EAL: request: mp_malloc_sync 00:10:17.056 EAL: No shared files mode enabled, IPC is disabled 00:10:17.056 EAL: Heap on socket 0 was shrunk by 6MB 00:10:17.056 EAL: Trying to obtain current memory policy. 00:10:17.056 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:17.056 EAL: Restoring previous memory policy: 0 00:10:17.056 EAL: Calling mem event callback 'spdk:(nil)' 00:10:17.056 EAL: request: mp_malloc_sync 00:10:17.056 EAL: No shared files mode enabled, IPC is disabled 00:10:17.056 EAL: Heap on socket 0 was expanded by 10MB 00:10:17.056 EAL: Calling mem event callback 'spdk:(nil)' 00:10:17.056 EAL: request: mp_malloc_sync 00:10:17.056 EAL: No shared files mode enabled, IPC is disabled 00:10:17.056 EAL: Heap on socket 0 was shrunk by 10MB 00:10:17.056 EAL: Trying to obtain current memory policy. 00:10:17.056 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:17.056 EAL: Restoring previous memory policy: 0 00:10:17.056 EAL: Calling mem event callback 'spdk:(nil)' 00:10:17.056 EAL: request: mp_malloc_sync 00:10:17.056 EAL: No shared files mode enabled, IPC is disabled 00:10:17.056 EAL: Heap on socket 0 was expanded by 18MB 00:10:17.314 EAL: Calling mem event callback 'spdk:(nil)' 00:10:17.314 EAL: request: mp_malloc_sync 00:10:17.314 EAL: No shared files mode enabled, IPC is disabled 00:10:17.314 EAL: Heap on socket 0 was shrunk by 18MB 00:10:17.314 EAL: Trying to obtain current memory policy. 
00:10:17.314 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:17.314 EAL: Restoring previous memory policy: 0 00:10:17.314 EAL: Calling mem event callback 'spdk:(nil)' 00:10:17.314 EAL: request: mp_malloc_sync 00:10:17.314 EAL: No shared files mode enabled, IPC is disabled 00:10:17.314 EAL: Heap on socket 0 was expanded by 34MB 00:10:17.314 EAL: Calling mem event callback 'spdk:(nil)' 00:10:17.314 EAL: request: mp_malloc_sync 00:10:17.314 EAL: No shared files mode enabled, IPC is disabled 00:10:17.314 EAL: Heap on socket 0 was shrunk by 34MB 00:10:17.314 EAL: Trying to obtain current memory policy. 00:10:17.314 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:17.314 EAL: Restoring previous memory policy: 0 00:10:17.314 EAL: Calling mem event callback 'spdk:(nil)' 00:10:17.314 EAL: request: mp_malloc_sync 00:10:17.314 EAL: No shared files mode enabled, IPC is disabled 00:10:17.314 EAL: Heap on socket 0 was expanded by 66MB 00:10:17.572 EAL: Calling mem event callback 'spdk:(nil)' 00:10:17.572 EAL: request: mp_malloc_sync 00:10:17.572 EAL: No shared files mode enabled, IPC is disabled 00:10:17.572 EAL: Heap on socket 0 was shrunk by 66MB 00:10:17.572 EAL: Trying to obtain current memory policy. 00:10:17.572 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:17.572 EAL: Restoring previous memory policy: 0 00:10:17.572 EAL: Calling mem event callback 'spdk:(nil)' 00:10:17.573 EAL: request: mp_malloc_sync 00:10:17.573 EAL: No shared files mode enabled, IPC is disabled 00:10:17.573 EAL: Heap on socket 0 was expanded by 130MB 00:10:17.831 EAL: Calling mem event callback 'spdk:(nil)' 00:10:17.831 EAL: request: mp_malloc_sync 00:10:17.831 EAL: No shared files mode enabled, IPC is disabled 00:10:17.831 EAL: Heap on socket 0 was shrunk by 130MB 00:10:18.089 EAL: Trying to obtain current memory policy. 00:10:18.090 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:18.090 EAL: Restoring previous memory policy: 0 00:10:18.090 EAL: Calling mem event callback 'spdk:(nil)' 00:10:18.090 EAL: request: mp_malloc_sync 00:10:18.090 EAL: No shared files mode enabled, IPC is disabled 00:10:18.090 EAL: Heap on socket 0 was expanded by 258MB 00:10:18.348 EAL: Calling mem event callback 'spdk:(nil)' 00:10:18.607 EAL: request: mp_malloc_sync 00:10:18.607 EAL: No shared files mode enabled, IPC is disabled 00:10:18.607 EAL: Heap on socket 0 was shrunk by 258MB 00:10:18.865 EAL: Trying to obtain current memory policy. 00:10:18.865 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:18.865 EAL: Restoring previous memory policy: 0 00:10:18.865 EAL: Calling mem event callback 'spdk:(nil)' 00:10:18.865 EAL: request: mp_malloc_sync 00:10:18.865 EAL: No shared files mode enabled, IPC is disabled 00:10:18.865 EAL: Heap on socket 0 was expanded by 514MB 00:10:19.873 EAL: Calling mem event callback 'spdk:(nil)' 00:10:19.873 EAL: request: mp_malloc_sync 00:10:19.873 EAL: No shared files mode enabled, IPC is disabled 00:10:19.873 EAL: Heap on socket 0 was shrunk by 514MB 00:10:20.808 EAL: Trying to obtain current memory policy. 
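The expansions logged above step through 4MB, 6MB, 10MB, 18MB, 34MB and so on up to 1026MB: each round allocates 2MB plus a doubling power of two, so every spdk_malloc()/spdk_free() pair forces the EAL heap to expand and then shrink through the 'spdk:(nil)' mem event callback. A hedged reconstruction of that allocation pattern (a sketch, not the actual test source):

```c
#include "spdk/env.h"

/* Sketch only: allocate 2MB + 2^k MB for k = 1..10, matching the
 * 4MB/6MB/10MB/.../1026MB expansions in the log; each iteration grows
 * and then shrinks the EAL heap via the registered mem event callback. */
static void
malloc_doubling_pass(void)
{
	const size_t mb = 1024 * 1024;
	size_t sz;
	void *buf;

	for (sz = 2 * mb; sz <= 1024 * mb; sz *= 2) {
		buf = spdk_malloc(sz + 2 * mb, 0x200000, NULL,
				  SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
		if (buf == NULL) {
			break;
		}
		spdk_free(buf);
	}
}
```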
00:10:20.808 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:20.808 EAL: Restoring previous memory policy: 0 00:10:20.808 EAL: Calling mem event callback 'spdk:(nil)' 00:10:20.808 EAL: request: mp_malloc_sync 00:10:20.808 EAL: No shared files mode enabled, IPC is disabled 00:10:20.808 EAL: Heap on socket 0 was expanded by 1026MB 00:10:22.707 EAL: Calling mem event callback 'spdk:(nil)' 00:10:22.707 EAL: request: mp_malloc_sync 00:10:22.707 EAL: No shared files mode enabled, IPC is disabled 00:10:22.707 EAL: Heap on socket 0 was shrunk by 1026MB 00:10:24.084 passed 00:10:24.084 00:10:24.084 Run Summary: Type Total Ran Passed Failed Inactive 00:10:24.084 suites 1 1 n/a 0 0 00:10:24.084 tests 2 2 2 0 0 00:10:24.084 asserts 6496 6496 6496 0 n/a 00:10:24.084 00:10:24.084 Elapsed time = 7.284 seconds 00:10:24.084 EAL: Calling mem event callback 'spdk:(nil)' 00:10:24.084 EAL: request: mp_malloc_sync 00:10:24.084 EAL: No shared files mode enabled, IPC is disabled 00:10:24.084 EAL: Heap on socket 0 was shrunk by 2MB 00:10:24.084 EAL: No shared files mode enabled, IPC is disabled 00:10:24.084 EAL: No shared files mode enabled, IPC is disabled 00:10:24.084 EAL: No shared files mode enabled, IPC is disabled 00:10:24.084 00:10:24.084 real 0m7.601s 00:10:24.084 user 0m6.489s 00:10:24.084 sys 0m0.964s 00:10:24.084 12:54:28 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:10:24.084 12:54:28 -- common/autotest_common.sh@10 -- # set +x 00:10:24.084 ************************************ 00:10:24.084 END TEST env_vtophys 00:10:24.084 ************************************ 00:10:24.084 12:54:28 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:10:24.084 12:54:28 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:10:24.084 12:54:28 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:24.084 12:54:28 -- common/autotest_common.sh@10 -- # set +x 00:10:24.084 ************************************ 00:10:24.084 START TEST env_pci 00:10:24.084 ************************************ 00:10:24.084 12:54:28 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:10:24.084 00:10:24.084 00:10:24.084 CUnit - A unit testing framework for C - Version 2.1-3 00:10:24.084 http://cunit.sourceforge.net/ 00:10:24.084 00:10:24.084 00:10:24.084 Suite: pci 00:10:24.084 Test: pci_hook ...[2024-04-17 12:54:28.205405] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 109477 has claimed it 00:10:24.343 passed 00:10:24.343 00:10:24.343 EAL: Cannot find device (10000:00:01.0) 00:10:24.343 EAL: Failed to attach device on primary process 00:10:24.343 Run Summary: Type Total Ran Passed Failed Inactive 00:10:24.343 suites 1 1 n/a 0 0 00:10:24.343 tests 1 1 1 0 0 00:10:24.343 asserts 25 25 25 0 n/a 00:10:24.343 00:10:24.343 Elapsed time = 0.006 seconds 00:10:24.343 00:10:24.343 real 0m0.086s 00:10:24.343 user 0m0.040s 00:10:24.343 sys 0m0.046s 00:10:24.343 12:54:28 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:10:24.343 ************************************ 00:10:24.343 END TEST env_pci 00:10:24.343 ************************************ 00:10:24.343 12:54:28 -- common/autotest_common.sh@10 -- # set +x 00:10:24.343 12:54:28 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:10:24.343 12:54:28 -- env/env.sh@15 -- # uname 00:10:24.343 12:54:28 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:10:24.343 12:54:28 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:10:24.343 12:54:28 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:10:24.343 12:54:28 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:10:24.343 12:54:28 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:24.343 12:54:28 -- common/autotest_common.sh@10 -- # set +x 00:10:24.343 ************************************ 00:10:24.343 START TEST env_dpdk_post_init 00:10:24.343 ************************************ 00:10:24.343 12:54:28 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:10:24.343 EAL: Detected CPU lcores: 10 00:10:24.343 EAL: Detected NUMA nodes: 1 00:10:24.343 EAL: Detected static linkage of DPDK 00:10:24.343 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:10:24.343 EAL: Selected IOVA mode 'PA' 00:10:24.343 EAL: VFIO support initialized 00:10:24.602 TELEMETRY: No legacy callbacks, legacy socket not created 00:10:24.602 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:10:24.602 Starting DPDK initialization... 00:10:24.602 Starting SPDK post initialization... 00:10:24.602 SPDK NVMe probe 00:10:24.602 Attaching to 0000:00:10.0 00:10:24.602 Attached to 0000:00:10.0 00:10:24.602 Cleaning up... 00:10:24.602 00:10:24.602 real 0m0.275s 00:10:24.602 user 0m0.079s 00:10:24.602 sys 0m0.096s 00:10:24.602 12:54:28 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:10:24.602 12:54:28 -- common/autotest_common.sh@10 -- # set +x 00:10:24.602 ************************************ 00:10:24.602 END TEST env_dpdk_post_init 00:10:24.602 ************************************ 00:10:24.602 12:54:28 -- env/env.sh@26 -- # uname 00:10:24.602 12:54:28 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:10:24.602 12:54:28 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:10:24.602 12:54:28 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:10:24.602 12:54:28 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:24.602 12:54:28 -- common/autotest_common.sh@10 -- # set +x 00:10:24.602 ************************************ 00:10:24.602 START TEST env_mem_callbacks 00:10:24.602 ************************************ 00:10:24.602 12:54:28 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:10:24.602 EAL: Detected CPU lcores: 10 00:10:24.602 EAL: Detected NUMA nodes: 1 00:10:24.602 EAL: Detected static linkage of DPDK 00:10:24.862 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:10:24.862 EAL: Selected IOVA mode 'PA' 00:10:24.862 EAL: VFIO support initialized 00:10:24.862 TELEMETRY: No legacy callbacks, legacy socket not created 00:10:24.862 00:10:24.862 00:10:24.862 CUnit - A unit testing framework for C - Version 2.1-3 00:10:24.862 http://cunit.sourceforge.net/ 00:10:24.862 00:10:24.862 00:10:24.862 Suite: memory 00:10:24.862 Test: test ... 
00:10:24.862 register 0x200000200000 2097152 00:10:24.862 malloc 3145728 00:10:24.862 register 0x200000400000 4194304 00:10:24.862 buf 0x2000004fffc0 len 3145728 PASSED 00:10:24.862 malloc 64 00:10:24.862 buf 0x2000004ffec0 len 64 PASSED 00:10:24.862 malloc 4194304 00:10:24.862 register 0x200000800000 6291456 00:10:24.862 buf 0x2000009fffc0 len 4194304 PASSED 00:10:24.862 free 0x2000004fffc0 3145728 00:10:24.862 free 0x2000004ffec0 64 00:10:24.862 unregister 0x200000400000 4194304 PASSED 00:10:24.862 free 0x2000009fffc0 4194304 00:10:24.862 unregister 0x200000800000 6291456 PASSED 00:10:24.862 malloc 8388608 00:10:24.862 register 0x200000400000 10485760 00:10:24.862 buf 0x2000005fffc0 len 8388608 PASSED 00:10:24.862 free 0x2000005fffc0 8388608 00:10:24.862 unregister 0x200000400000 10485760 PASSED 00:10:24.862 passed 00:10:24.862 00:10:24.862 Run Summary: Type Total Ran Passed Failed Inactive 00:10:24.862 suites 1 1 n/a 0 0 00:10:24.862 tests 1 1 1 0 0 00:10:24.862 asserts 15 15 15 0 n/a 00:10:24.862 00:10:24.862 Elapsed time = 0.059 seconds 00:10:24.862 00:10:24.862 real 0m0.276s 00:10:24.862 user 0m0.112s 00:10:24.862 sys 0m0.064s 00:10:24.862 12:54:28 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:10:24.862 12:54:28 -- common/autotest_common.sh@10 -- # set +x 00:10:24.862 ************************************ 00:10:24.862 END TEST env_mem_callbacks 00:10:24.862 ************************************ 00:10:25.121 00:10:25.121 real 0m9.010s 00:10:25.121 user 0m7.249s 00:10:25.121 sys 0m1.409s 00:10:25.121 12:54:29 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:10:25.121 12:54:29 -- common/autotest_common.sh@10 -- # set +x 00:10:25.121 ************************************ 00:10:25.121 END TEST env 00:10:25.121 ************************************ 00:10:25.121 12:54:29 -- spdk/autotest.sh@165 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:10:25.121 12:54:29 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:10:25.121 12:54:29 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:25.121 12:54:29 -- common/autotest_common.sh@10 -- # set +x 00:10:25.121 ************************************ 00:10:25.121 START TEST rpc 00:10:25.121 ************************************ 00:10:25.121 12:54:29 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:10:25.121 * Looking for test storage... 00:10:25.121 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:10:25.121 12:54:29 -- rpc/rpc.sh@65 -- # spdk_pid=109622 00:10:25.121 12:54:29 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:10:25.121 12:54:29 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:25.121 12:54:29 -- rpc/rpc.sh@67 -- # waitforlisten 109622 00:10:25.121 12:54:29 -- common/autotest_common.sh@817 -- # '[' -z 109622 ']' 00:10:25.121 12:54:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.121 12:54:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:25.121 12:54:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
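The register/unregister trace in the env_mem_callbacks test above is an external buffer being pushed through SPDK's memory map so it becomes visible to vtophys and to notify callbacks. A minimal sketch, assuming the spdk_mem_register()/spdk_mem_unregister() pair from spdk/env.h:

```c
#include "spdk/env.h"

/* Sketch: make an externally allocated region visible to SPDK. The
 * register/unregister lines in the trace above are the notifications
 * this pair emits to every registered memory callback. */
static int
register_region(void *vaddr, size_t len)
{
	int rc;

	rc = spdk_mem_register(vaddr, len); /* logged as "register ..." */
	if (rc != 0) {
		return rc;
	}

	/* ... region is now usable for DMA / vtophys lookups ... */

	return spdk_mem_unregister(vaddr, len); /* logged as "unregister ..." */
}
```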
00:10:25.121 12:54:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:25.121 12:54:29 -- common/autotest_common.sh@10 -- # set +x 00:10:25.121 [2024-04-17 12:54:29.235141] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:10:25.121 [2024-04-17 12:54:29.235316] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109622 ] 00:10:25.379 [2024-04-17 12:54:29.395638] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.638 [2024-04-17 12:54:29.643759] app.c: 521:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:10:25.638 [2024-04-17 12:54:29.643908] app.c: 522:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 109622' to capture a snapshot of events at runtime. 00:10:25.638 [2024-04-17 12:54:29.643958] app.c: 527:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:25.638 [2024-04-17 12:54:29.643983] app.c: 528:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:25.638 [2024-04-17 12:54:29.644038] app.c: 529:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid109622 for offline analysis/debug. 00:10:25.638 [2024-04-17 12:54:29.644134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.575 12:54:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:26.575 12:54:30 -- common/autotest_common.sh@850 -- # return 0 00:10:26.575 12:54:30 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:10:26.575 12:54:30 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:10:26.575 12:54:30 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:10:26.575 12:54:30 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:10:26.575 12:54:30 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:10:26.575 12:54:30 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:26.575 12:54:30 -- common/autotest_common.sh@10 -- # set +x 00:10:26.575 ************************************ 00:10:26.575 START TEST rpc_integrity 00:10:26.575 ************************************ 00:10:26.575 12:54:30 -- common/autotest_common.sh@1099 -- # rpc_integrity 00:10:26.575 12:54:30 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:26.575 12:54:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:26.575 12:54:30 -- common/autotest_common.sh@10 -- # set +x 00:10:26.575 12:54:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:26.575 12:54:30 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:10:26.575 12:54:30 -- rpc/rpc.sh@13 -- # jq length 00:10:26.575 12:54:30 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:10:26.575 12:54:30 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:10:26.575 12:54:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:26.575 12:54:30 -- common/autotest_common.sh@10 -- # set +x 00:10:26.575 12:54:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:26.575 12:54:30 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:10:26.575 12:54:30 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 
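The rpc suite drives the spdk_tgt started above through scripts/rpc.py over /var/tmp/spdk.sock. On the target side, each method name used in the calls that follow (bdev_malloc_create, bdev_get_bdevs, bdev_passthru_create, ...) is registered with the JSON-RPC server roughly like this sketch; "rpc_hello" is a made-up method shown only for illustration:

```c
#include "spdk/rpc.h"
#include "spdk/jsonrpc.h"
#include "spdk/json.h"

/* Illustrative method; "rpc_hello" does not exist in SPDK. */
static void
rpc_hello(struct spdk_jsonrpc_request *request, const struct spdk_json_val *params)
{
	struct spdk_json_write_ctx *w;

	(void)params; /* this sketch takes no parameters */

	w = spdk_jsonrpc_begin_result(request);
	spdk_json_write_string(w, "hello");
	spdk_jsonrpc_end_result(request, w);
}
/* Exposes the method to clients such as scripts/rpc.py */
SPDK_RPC_REGISTER("rpc_hello", rpc_hello, SPDK_RPC_RUNTIME)
```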
00:10:26.575 12:54:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:26.575 12:54:30 -- common/autotest_common.sh@10 -- # set +x 00:10:26.575 12:54:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:26.575 12:54:30 -- rpc/rpc.sh@16 -- # bdevs='[ 00:10:26.575 { 00:10:26.575 "name": "Malloc0", 00:10:26.575 "aliases": [ 00:10:26.575 "240759ef-00b1-4093-9dfb-c27c71e03887" 00:10:26.575 ], 00:10:26.575 "product_name": "Malloc disk", 00:10:26.576 "block_size": 512, 00:10:26.576 "num_blocks": 16384, 00:10:26.576 "uuid": "240759ef-00b1-4093-9dfb-c27c71e03887", 00:10:26.576 "assigned_rate_limits": { 00:10:26.576 "rw_ios_per_sec": 0, 00:10:26.576 "rw_mbytes_per_sec": 0, 00:10:26.576 "r_mbytes_per_sec": 0, 00:10:26.576 "w_mbytes_per_sec": 0 00:10:26.576 }, 00:10:26.576 "claimed": false, 00:10:26.576 "zoned": false, 00:10:26.576 "supported_io_types": { 00:10:26.576 "read": true, 00:10:26.576 "write": true, 00:10:26.576 "unmap": true, 00:10:26.576 "write_zeroes": true, 00:10:26.576 "flush": true, 00:10:26.576 "reset": true, 00:10:26.576 "compare": false, 00:10:26.576 "compare_and_write": false, 00:10:26.576 "abort": true, 00:10:26.576 "nvme_admin": false, 00:10:26.576 "nvme_io": false 00:10:26.576 }, 00:10:26.576 "memory_domains": [ 00:10:26.576 { 00:10:26.576 "dma_device_id": "system", 00:10:26.576 "dma_device_type": 1 00:10:26.576 }, 00:10:26.576 { 00:10:26.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.576 "dma_device_type": 2 00:10:26.576 } 00:10:26.576 ], 00:10:26.576 "driver_specific": {} 00:10:26.576 } 00:10:26.576 ]' 00:10:26.576 12:54:30 -- rpc/rpc.sh@17 -- # jq length 00:10:26.576 12:54:30 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:10:26.576 12:54:30 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:10:26.576 12:54:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:26.576 12:54:30 -- common/autotest_common.sh@10 -- # set +x 00:10:26.576 [2024-04-17 12:54:30.602973] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:10:26.576 [2024-04-17 12:54:30.603098] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.576 [2024-04-17 12:54:30.603150] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:26.576 [2024-04-17 12:54:30.603192] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.576 [2024-04-17 12:54:30.606010] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.576 [2024-04-17 12:54:30.606081] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:10:26.576 Passthru0 00:10:26.576 12:54:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:26.576 12:54:30 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:10:26.576 12:54:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:26.576 12:54:30 -- common/autotest_common.sh@10 -- # set +x 00:10:26.576 12:54:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:26.576 12:54:30 -- rpc/rpc.sh@20 -- # bdevs='[ 00:10:26.576 { 00:10:26.576 "name": "Malloc0", 00:10:26.576 "aliases": [ 00:10:26.576 "240759ef-00b1-4093-9dfb-c27c71e03887" 00:10:26.576 ], 00:10:26.576 "product_name": "Malloc disk", 00:10:26.576 "block_size": 512, 00:10:26.576 "num_blocks": 16384, 00:10:26.576 "uuid": "240759ef-00b1-4093-9dfb-c27c71e03887", 00:10:26.576 "assigned_rate_limits": { 00:10:26.576 "rw_ios_per_sec": 0, 00:10:26.576 "rw_mbytes_per_sec": 0, 00:10:26.576 "r_mbytes_per_sec": 0, 00:10:26.576 
"w_mbytes_per_sec": 0 00:10:26.576 }, 00:10:26.576 "claimed": true, 00:10:26.576 "claim_type": "exclusive_write", 00:10:26.576 "zoned": false, 00:10:26.576 "supported_io_types": { 00:10:26.576 "read": true, 00:10:26.576 "write": true, 00:10:26.576 "unmap": true, 00:10:26.576 "write_zeroes": true, 00:10:26.576 "flush": true, 00:10:26.576 "reset": true, 00:10:26.576 "compare": false, 00:10:26.576 "compare_and_write": false, 00:10:26.576 "abort": true, 00:10:26.576 "nvme_admin": false, 00:10:26.576 "nvme_io": false 00:10:26.576 }, 00:10:26.576 "memory_domains": [ 00:10:26.576 { 00:10:26.576 "dma_device_id": "system", 00:10:26.576 "dma_device_type": 1 00:10:26.576 }, 00:10:26.576 { 00:10:26.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.576 "dma_device_type": 2 00:10:26.576 } 00:10:26.576 ], 00:10:26.576 "driver_specific": {} 00:10:26.576 }, 00:10:26.576 { 00:10:26.576 "name": "Passthru0", 00:10:26.576 "aliases": [ 00:10:26.576 "0c780616-c79e-502c-9183-529da85f7664" 00:10:26.576 ], 00:10:26.576 "product_name": "passthru", 00:10:26.576 "block_size": 512, 00:10:26.576 "num_blocks": 16384, 00:10:26.576 "uuid": "0c780616-c79e-502c-9183-529da85f7664", 00:10:26.576 "assigned_rate_limits": { 00:10:26.576 "rw_ios_per_sec": 0, 00:10:26.576 "rw_mbytes_per_sec": 0, 00:10:26.576 "r_mbytes_per_sec": 0, 00:10:26.576 "w_mbytes_per_sec": 0 00:10:26.576 }, 00:10:26.576 "claimed": false, 00:10:26.576 "zoned": false, 00:10:26.576 "supported_io_types": { 00:10:26.576 "read": true, 00:10:26.576 "write": true, 00:10:26.576 "unmap": true, 00:10:26.576 "write_zeroes": true, 00:10:26.576 "flush": true, 00:10:26.576 "reset": true, 00:10:26.576 "compare": false, 00:10:26.576 "compare_and_write": false, 00:10:26.576 "abort": true, 00:10:26.576 "nvme_admin": false, 00:10:26.576 "nvme_io": false 00:10:26.576 }, 00:10:26.576 "memory_domains": [ 00:10:26.576 { 00:10:26.576 "dma_device_id": "system", 00:10:26.576 "dma_device_type": 1 00:10:26.576 }, 00:10:26.576 { 00:10:26.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.576 "dma_device_type": 2 00:10:26.576 } 00:10:26.576 ], 00:10:26.576 "driver_specific": { 00:10:26.576 "passthru": { 00:10:26.576 "name": "Passthru0", 00:10:26.576 "base_bdev_name": "Malloc0" 00:10:26.576 } 00:10:26.576 } 00:10:26.576 } 00:10:26.576 ]' 00:10:26.576 12:54:30 -- rpc/rpc.sh@21 -- # jq length 00:10:26.576 12:54:30 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:10:26.576 12:54:30 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:10:26.576 12:54:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:26.576 12:54:30 -- common/autotest_common.sh@10 -- # set +x 00:10:26.576 12:54:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:26.576 12:54:30 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:10:26.576 12:54:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:26.576 12:54:30 -- common/autotest_common.sh@10 -- # set +x 00:10:26.576 12:54:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:26.576 12:54:30 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:10:26.576 12:54:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:26.576 12:54:30 -- common/autotest_common.sh@10 -- # set +x 00:10:26.576 12:54:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:26.576 12:54:30 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:10:26.576 12:54:30 -- rpc/rpc.sh@26 -- # jq length 00:10:26.902 12:54:30 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:10:26.902 00:10:26.902 real 0m0.321s 00:10:26.902 user 0m0.207s 00:10:26.902 sys 0m0.023s 00:10:26.902 12:54:30 
-- common/autotest_common.sh@1100 -- # xtrace_disable 00:10:26.902 12:54:30 -- common/autotest_common.sh@10 -- # set +x 00:10:26.902 ************************************ 00:10:26.902 END TEST rpc_integrity 00:10:26.902 ************************************ 00:10:26.902 12:54:30 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:10:26.902 12:54:30 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:10:26.902 12:54:30 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:26.902 12:54:30 -- common/autotest_common.sh@10 -- # set +x 00:10:26.902 ************************************ 00:10:26.902 START TEST rpc_plugins 00:10:26.902 ************************************ 00:10:26.902 12:54:30 -- common/autotest_common.sh@1099 -- # rpc_plugins 00:10:26.902 12:54:30 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:10:26.902 12:54:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:26.902 12:54:30 -- common/autotest_common.sh@10 -- # set +x 00:10:26.902 12:54:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:26.902 12:54:30 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:10:26.902 12:54:30 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:10:26.902 12:54:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:26.902 12:54:30 -- common/autotest_common.sh@10 -- # set +x 00:10:26.902 12:54:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:26.902 12:54:30 -- rpc/rpc.sh@31 -- # bdevs='[ 00:10:26.902 { 00:10:26.902 "name": "Malloc1", 00:10:26.902 "aliases": [ 00:10:26.902 "1e93fdf5-c309-4132-8ac4-aaa541cbb58d" 00:10:26.902 ], 00:10:26.902 "product_name": "Malloc disk", 00:10:26.902 "block_size": 4096, 00:10:26.902 "num_blocks": 256, 00:10:26.902 "uuid": "1e93fdf5-c309-4132-8ac4-aaa541cbb58d", 00:10:26.902 "assigned_rate_limits": { 00:10:26.902 "rw_ios_per_sec": 0, 00:10:26.902 "rw_mbytes_per_sec": 0, 00:10:26.902 "r_mbytes_per_sec": 0, 00:10:26.902 "w_mbytes_per_sec": 0 00:10:26.902 }, 00:10:26.902 "claimed": false, 00:10:26.902 "zoned": false, 00:10:26.902 "supported_io_types": { 00:10:26.902 "read": true, 00:10:26.902 "write": true, 00:10:26.902 "unmap": true, 00:10:26.902 "write_zeroes": true, 00:10:26.902 "flush": true, 00:10:26.902 "reset": true, 00:10:26.902 "compare": false, 00:10:26.902 "compare_and_write": false, 00:10:26.902 "abort": true, 00:10:26.902 "nvme_admin": false, 00:10:26.902 "nvme_io": false 00:10:26.902 }, 00:10:26.902 "memory_domains": [ 00:10:26.902 { 00:10:26.902 "dma_device_id": "system", 00:10:26.902 "dma_device_type": 1 00:10:26.902 }, 00:10:26.902 { 00:10:26.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.902 "dma_device_type": 2 00:10:26.902 } 00:10:26.902 ], 00:10:26.902 "driver_specific": {} 00:10:26.902 } 00:10:26.902 ]' 00:10:26.902 12:54:30 -- rpc/rpc.sh@32 -- # jq length 00:10:26.902 12:54:30 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:10:26.902 12:54:30 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:10:26.902 12:54:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:26.902 12:54:30 -- common/autotest_common.sh@10 -- # set +x 00:10:26.902 12:54:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:26.902 12:54:30 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:10:26.902 12:54:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:26.902 12:54:30 -- common/autotest_common.sh@10 -- # set +x 00:10:26.902 12:54:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:26.902 12:54:30 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:10:26.902 12:54:30 -- rpc/rpc.sh@36 -- # 
jq length 00:10:26.902 12:54:30 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:10:26.902 00:10:26.902 real 0m0.164s 00:10:26.902 user 0m0.107s 00:10:26.902 sys 0m0.014s 00:10:26.902 12:54:31 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:10:26.902 12:54:31 -- common/autotest_common.sh@10 -- # set +x 00:10:26.902 ************************************ 00:10:26.902 END TEST rpc_plugins 00:10:26.902 ************************************ 00:10:27.162 12:54:31 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:10:27.162 12:54:31 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:10:27.162 12:54:31 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:27.162 12:54:31 -- common/autotest_common.sh@10 -- # set +x 00:10:27.162 ************************************ 00:10:27.162 START TEST rpc_trace_cmd_test 00:10:27.162 ************************************ 00:10:27.162 12:54:31 -- common/autotest_common.sh@1099 -- # rpc_trace_cmd_test 00:10:27.162 12:54:31 -- rpc/rpc.sh@40 -- # local info 00:10:27.162 12:54:31 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:10:27.162 12:54:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.162 12:54:31 -- common/autotest_common.sh@10 -- # set +x 00:10:27.162 12:54:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.162 12:54:31 -- rpc/rpc.sh@42 -- # info='{ 00:10:27.162 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid109622", 00:10:27.162 "tpoint_group_mask": "0x8", 00:10:27.162 "iscsi_conn": { 00:10:27.162 "mask": "0x2", 00:10:27.162 "tpoint_mask": "0x0" 00:10:27.162 }, 00:10:27.162 "scsi": { 00:10:27.162 "mask": "0x4", 00:10:27.162 "tpoint_mask": "0x0" 00:10:27.162 }, 00:10:27.162 "bdev": { 00:10:27.162 "mask": "0x8", 00:10:27.162 "tpoint_mask": "0xffffffffffffffff" 00:10:27.162 }, 00:10:27.162 "nvmf_rdma": { 00:10:27.162 "mask": "0x10", 00:10:27.162 "tpoint_mask": "0x0" 00:10:27.162 }, 00:10:27.162 "nvmf_tcp": { 00:10:27.162 "mask": "0x20", 00:10:27.162 "tpoint_mask": "0x0" 00:10:27.162 }, 00:10:27.162 "ftl": { 00:10:27.162 "mask": "0x40", 00:10:27.162 "tpoint_mask": "0x0" 00:10:27.162 }, 00:10:27.162 "blobfs": { 00:10:27.162 "mask": "0x80", 00:10:27.162 "tpoint_mask": "0x0" 00:10:27.162 }, 00:10:27.162 "dsa": { 00:10:27.162 "mask": "0x200", 00:10:27.162 "tpoint_mask": "0x0" 00:10:27.162 }, 00:10:27.162 "thread": { 00:10:27.162 "mask": "0x400", 00:10:27.162 "tpoint_mask": "0x0" 00:10:27.162 }, 00:10:27.162 "nvme_pcie": { 00:10:27.162 "mask": "0x800", 00:10:27.162 "tpoint_mask": "0x0" 00:10:27.162 }, 00:10:27.162 "iaa": { 00:10:27.162 "mask": "0x1000", 00:10:27.162 "tpoint_mask": "0x0" 00:10:27.162 }, 00:10:27.162 "nvme_tcp": { 00:10:27.162 "mask": "0x2000", 00:10:27.162 "tpoint_mask": "0x0" 00:10:27.162 }, 00:10:27.162 "bdev_nvme": { 00:10:27.162 "mask": "0x4000", 00:10:27.162 "tpoint_mask": "0x0" 00:10:27.162 }, 00:10:27.162 "sock": { 00:10:27.162 "mask": "0x8000", 00:10:27.162 "tpoint_mask": "0x0" 00:10:27.162 } 00:10:27.162 }' 00:10:27.162 12:54:31 -- rpc/rpc.sh@43 -- # jq length 00:10:27.162 12:54:31 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:10:27.162 12:54:31 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:10:27.162 12:54:31 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:10:27.162 12:54:31 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:10:27.162 12:54:31 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:10:27.162 12:54:31 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:10:27.422 12:54:31 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:10:27.422 12:54:31 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 
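The jq probes around this point pull fields out of trace_get_info: spdk_tgt was launched with '-e bdev', which sets bit 3 of the tpoint group mask (the "0x8" above) and enables every tracepoint inside that group (the 0xffffffffffffffff mask the next check verifies). The arithmetic, spelled out as a trivial sketch:

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	uint64_t group_mask = 1ULL << 3;   /* bdev trace group -> 0x8, as above */
	uint64_t tpoint_mask = UINT64_MAX; /* every tracepoint in the group */

	printf("tpoint_group_mask=0x%" PRIx64 " tpoint_mask=0x%" PRIx64 "\n",
	       group_mask, tpoint_mask);
	return 0;
}
```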
00:10:27.422 ************************************ 00:10:27.422 END TEST rpc_trace_cmd_test 00:10:27.422 ************************************ 00:10:27.422 12:54:31 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:10:27.422 00:10:27.422 real 0m0.298s 00:10:27.422 user 0m0.269s 00:10:27.422 sys 0m0.018s 00:10:27.422 12:54:31 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:10:27.422 12:54:31 -- common/autotest_common.sh@10 -- # set +x 00:10:27.422 12:54:31 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:10:27.422 12:54:31 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:10:27.422 12:54:31 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:10:27.422 12:54:31 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:10:27.422 12:54:31 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:27.422 12:54:31 -- common/autotest_common.sh@10 -- # set +x 00:10:27.422 ************************************ 00:10:27.422 START TEST rpc_daemon_integrity 00:10:27.422 ************************************ 00:10:27.422 12:54:31 -- common/autotest_common.sh@1099 -- # rpc_integrity 00:10:27.422 12:54:31 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:27.422 12:54:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.422 12:54:31 -- common/autotest_common.sh@10 -- # set +x 00:10:27.422 12:54:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.422 12:54:31 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:10:27.422 12:54:31 -- rpc/rpc.sh@13 -- # jq length 00:10:27.422 12:54:31 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:10:27.422 12:54:31 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:10:27.422 12:54:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.422 12:54:31 -- common/autotest_common.sh@10 -- # set +x 00:10:27.422 12:54:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.422 12:54:31 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:10:27.422 12:54:31 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:10:27.422 12:54:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.422 12:54:31 -- common/autotest_common.sh@10 -- # set +x 00:10:27.422 12:54:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.422 12:54:31 -- rpc/rpc.sh@16 -- # bdevs='[ 00:10:27.422 { 00:10:27.422 "name": "Malloc2", 00:10:27.422 "aliases": [ 00:10:27.422 "d4f7d6dc-e4e9-4039-879b-1b45deaf3ad1" 00:10:27.422 ], 00:10:27.422 "product_name": "Malloc disk", 00:10:27.422 "block_size": 512, 00:10:27.422 "num_blocks": 16384, 00:10:27.422 "uuid": "d4f7d6dc-e4e9-4039-879b-1b45deaf3ad1", 00:10:27.422 "assigned_rate_limits": { 00:10:27.422 "rw_ios_per_sec": 0, 00:10:27.422 "rw_mbytes_per_sec": 0, 00:10:27.422 "r_mbytes_per_sec": 0, 00:10:27.422 "w_mbytes_per_sec": 0 00:10:27.422 }, 00:10:27.422 "claimed": false, 00:10:27.422 "zoned": false, 00:10:27.422 "supported_io_types": { 00:10:27.422 "read": true, 00:10:27.422 "write": true, 00:10:27.422 "unmap": true, 00:10:27.422 "write_zeroes": true, 00:10:27.422 "flush": true, 00:10:27.422 "reset": true, 00:10:27.422 "compare": false, 00:10:27.422 "compare_and_write": false, 00:10:27.422 "abort": true, 00:10:27.422 "nvme_admin": false, 00:10:27.422 "nvme_io": false 00:10:27.422 }, 00:10:27.422 "memory_domains": [ 00:10:27.422 { 00:10:27.422 "dma_device_id": "system", 00:10:27.422 "dma_device_type": 1 00:10:27.422 }, 00:10:27.422 { 00:10:27.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.422 "dma_device_type": 2 00:10:27.422 } 00:10:27.422 ], 00:10:27.422 "driver_specific": {} 00:10:27.422 } 00:10:27.422 ]' 00:10:27.422 12:54:31 -- 
rpc/rpc.sh@17 -- # jq length 00:10:27.681 12:54:31 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:10:27.681 12:54:31 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:10:27.681 12:54:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.681 12:54:31 -- common/autotest_common.sh@10 -- # set +x 00:10:27.681 [2024-04-17 12:54:31.586660] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:10:27.681 [2024-04-17 12:54:31.586910] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:27.681 [2024-04-17 12:54:31.587060] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:27.681 [2024-04-17 12:54:31.587203] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:27.681 [2024-04-17 12:54:31.590009] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:27.681 [2024-04-17 12:54:31.590185] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:10:27.681 Passthru0 00:10:27.681 12:54:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.681 12:54:31 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:10:27.681 12:54:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.681 12:54:31 -- common/autotest_common.sh@10 -- # set +x 00:10:27.681 12:54:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.681 12:54:31 -- rpc/rpc.sh@20 -- # bdevs='[ 00:10:27.681 { 00:10:27.681 "name": "Malloc2", 00:10:27.681 "aliases": [ 00:10:27.681 "d4f7d6dc-e4e9-4039-879b-1b45deaf3ad1" 00:10:27.681 ], 00:10:27.681 "product_name": "Malloc disk", 00:10:27.681 "block_size": 512, 00:10:27.681 "num_blocks": 16384, 00:10:27.681 "uuid": "d4f7d6dc-e4e9-4039-879b-1b45deaf3ad1", 00:10:27.681 "assigned_rate_limits": { 00:10:27.681 "rw_ios_per_sec": 0, 00:10:27.681 "rw_mbytes_per_sec": 0, 00:10:27.681 "r_mbytes_per_sec": 0, 00:10:27.681 "w_mbytes_per_sec": 0 00:10:27.681 }, 00:10:27.681 "claimed": true, 00:10:27.681 "claim_type": "exclusive_write", 00:10:27.681 "zoned": false, 00:10:27.681 "supported_io_types": { 00:10:27.681 "read": true, 00:10:27.681 "write": true, 00:10:27.681 "unmap": true, 00:10:27.681 "write_zeroes": true, 00:10:27.681 "flush": true, 00:10:27.681 "reset": true, 00:10:27.681 "compare": false, 00:10:27.681 "compare_and_write": false, 00:10:27.681 "abort": true, 00:10:27.681 "nvme_admin": false, 00:10:27.681 "nvme_io": false 00:10:27.681 }, 00:10:27.681 "memory_domains": [ 00:10:27.681 { 00:10:27.681 "dma_device_id": "system", 00:10:27.681 "dma_device_type": 1 00:10:27.681 }, 00:10:27.681 { 00:10:27.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.681 "dma_device_type": 2 00:10:27.681 } 00:10:27.681 ], 00:10:27.681 "driver_specific": {} 00:10:27.681 }, 00:10:27.681 { 00:10:27.681 "name": "Passthru0", 00:10:27.681 "aliases": [ 00:10:27.681 "bfe23549-4ad4-5ab0-b876-6bacba00ab3f" 00:10:27.681 ], 00:10:27.681 "product_name": "passthru", 00:10:27.681 "block_size": 512, 00:10:27.681 "num_blocks": 16384, 00:10:27.681 "uuid": "bfe23549-4ad4-5ab0-b876-6bacba00ab3f", 00:10:27.681 "assigned_rate_limits": { 00:10:27.681 "rw_ios_per_sec": 0, 00:10:27.681 "rw_mbytes_per_sec": 0, 00:10:27.681 "r_mbytes_per_sec": 0, 00:10:27.681 "w_mbytes_per_sec": 0 00:10:27.681 }, 00:10:27.681 "claimed": false, 00:10:27.681 "zoned": false, 00:10:27.681 "supported_io_types": { 00:10:27.681 "read": true, 00:10:27.681 "write": true, 00:10:27.682 "unmap": true, 00:10:27.682 "write_zeroes": true, 00:10:27.682 
"flush": true, 00:10:27.682 "reset": true, 00:10:27.682 "compare": false, 00:10:27.682 "compare_and_write": false, 00:10:27.682 "abort": true, 00:10:27.682 "nvme_admin": false, 00:10:27.682 "nvme_io": false 00:10:27.682 }, 00:10:27.682 "memory_domains": [ 00:10:27.682 { 00:10:27.682 "dma_device_id": "system", 00:10:27.682 "dma_device_type": 1 00:10:27.682 }, 00:10:27.682 { 00:10:27.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.682 "dma_device_type": 2 00:10:27.682 } 00:10:27.682 ], 00:10:27.682 "driver_specific": { 00:10:27.682 "passthru": { 00:10:27.682 "name": "Passthru0", 00:10:27.682 "base_bdev_name": "Malloc2" 00:10:27.682 } 00:10:27.682 } 00:10:27.682 } 00:10:27.682 ]' 00:10:27.682 12:54:31 -- rpc/rpc.sh@21 -- # jq length 00:10:27.682 12:54:31 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:10:27.682 12:54:31 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:10:27.682 12:54:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.682 12:54:31 -- common/autotest_common.sh@10 -- # set +x 00:10:27.682 12:54:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.682 12:54:31 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:10:27.682 12:54:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.682 12:54:31 -- common/autotest_common.sh@10 -- # set +x 00:10:27.682 12:54:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.682 12:54:31 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:10:27.682 12:54:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.682 12:54:31 -- common/autotest_common.sh@10 -- # set +x 00:10:27.682 12:54:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.682 12:54:31 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:10:27.682 12:54:31 -- rpc/rpc.sh@26 -- # jq length 00:10:27.682 12:54:31 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:10:27.682 00:10:27.682 real 0m0.322s 00:10:27.682 user 0m0.210s 00:10:27.682 sys 0m0.022s 00:10:27.682 12:54:31 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:10:27.682 12:54:31 -- common/autotest_common.sh@10 -- # set +x 00:10:27.682 ************************************ 00:10:27.682 END TEST rpc_daemon_integrity 00:10:27.682 ************************************ 00:10:27.682 12:54:31 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:10:27.682 12:54:31 -- rpc/rpc.sh@84 -- # killprocess 109622 00:10:27.682 12:54:31 -- common/autotest_common.sh@924 -- # '[' -z 109622 ']' 00:10:27.682 12:54:31 -- common/autotest_common.sh@928 -- # kill -0 109622 00:10:27.682 12:54:31 -- common/autotest_common.sh@929 -- # uname 00:10:27.682 12:54:31 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:10:27.682 12:54:31 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 109622 00:10:27.682 killing process with pid 109622 00:10:27.682 12:54:31 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:10:27.682 12:54:31 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:10:27.682 12:54:31 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 109622' 00:10:27.682 12:54:31 -- common/autotest_common.sh@943 -- # kill 109622 00:10:27.682 12:54:31 -- common/autotest_common.sh@948 -- # wait 109622 00:10:30.215 ************************************ 00:10:30.215 END TEST rpc 00:10:30.215 ************************************ 00:10:30.215 00:10:30.215 real 0m4.900s 00:10:30.215 user 0m5.740s 00:10:30.215 sys 0m0.730s 00:10:30.215 12:54:33 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:10:30.215 12:54:33 -- common/autotest_common.sh@10 
-- # set +x 00:10:30.215 12:54:34 -- spdk/autotest.sh@166 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:10:30.215 12:54:34 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:10:30.215 12:54:34 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:30.215 12:54:34 -- common/autotest_common.sh@10 -- # set +x 00:10:30.215 ************************************ 00:10:30.215 START TEST rpc_client 00:10:30.215 ************************************ 00:10:30.215 12:54:34 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:10:30.215 * Looking for test storage... 00:10:30.215 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:10:30.215 12:54:34 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:10:30.215 OK 00:10:30.215 12:54:34 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:10:30.215 00:10:30.215 real 0m0.128s 00:10:30.215 user 0m0.069s 00:10:30.215 sys 0m0.068s 00:10:30.215 12:54:34 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:10:30.215 ************************************ 00:10:30.215 END TEST rpc_client 00:10:30.215 ************************************ 00:10:30.215 12:54:34 -- common/autotest_common.sh@10 -- # set +x 00:10:30.215 12:54:34 -- spdk/autotest.sh@167 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:10:30.215 12:54:34 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:10:30.215 12:54:34 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:30.215 12:54:34 -- common/autotest_common.sh@10 -- # set +x 00:10:30.215 ************************************ 00:10:30.215 START TEST json_config 00:10:30.215 ************************************ 00:10:30.215 12:54:34 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:10:30.215 12:54:34 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:30.215 12:54:34 -- nvmf/common.sh@7 -- # uname -s 00:10:30.215 12:54:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:30.215 12:54:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:30.215 12:54:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:30.215 12:54:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:30.215 12:54:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:30.215 12:54:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:30.215 12:54:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:30.215 12:54:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:30.215 12:54:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:30.215 12:54:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:30.215 12:54:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dd58517d-5367-4013-b2f4-71b81970c4d7 00:10:30.215 12:54:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=dd58517d-5367-4013-b2f4-71b81970c4d7 00:10:30.215 12:54:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:30.215 12:54:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:30.215 12:54:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:30.215 12:54:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:30.215 12:54:34 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:30.215 12:54:34 -- scripts/common.sh@502 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:10:30.215 12:54:34 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.215 12:54:34 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.215 12:54:34 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:30.215 12:54:34 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:30.215 12:54:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:30.215 12:54:34 -- paths/export.sh@5 -- # export PATH 00:10:30.215 12:54:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:30.215 12:54:34 -- nvmf/common.sh@47 -- # : 0 00:10:30.215 12:54:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:30.215 12:54:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:30.215 12:54:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:30.215 12:54:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:30.215 12:54:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:30.215 12:54:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:30.215 12:54:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:30.215 12:54:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:30.215 12:54:34 -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:10:30.215 12:54:34 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:10:30.215 12:54:34 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:10:30.215 12:54:34 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:10:30.215 12:54:34 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:10:30.215 12:54:34 -- json_config/json_config.sh@31 -- # app_pid=([target]="" [initiator]="") 00:10:30.215 12:54:34 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:10:30.215 12:54:34 -- json_config/json_config.sh@32 -- # app_socket=([target]='/var/tmp/spdk_tgt.sock' [initiator]='/var/tmp/spdk_initiator.sock') 00:10:30.215 12:54:34 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:10:30.215 
12:54:34 -- json_config/json_config.sh@33 -- # app_params=([target]='-m 0x1 -s 1024' [initiator]='-m 0x2 -g -u -s 1024') 00:10:30.215 12:54:34 -- json_config/json_config.sh@33 -- # declare -A app_params 00:10:30.215 12:54:34 -- json_config/json_config.sh@34 -- # configs_path=([target]="$rootdir/spdk_tgt_config.json" [initiator]="$rootdir/spdk_initiator_config.json") 00:10:30.215 12:54:34 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:10:30.215 12:54:34 -- json_config/json_config.sh@40 -- # last_event_id=0 00:10:30.215 12:54:34 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:30.215 12:54:34 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:10:30.215 INFO: JSON configuration test init 00:10:30.215 12:54:34 -- json_config/json_config.sh@357 -- # json_config_test_init 00:10:30.216 12:54:34 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:10:30.216 12:54:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:30.216 12:54:34 -- common/autotest_common.sh@10 -- # set +x 00:10:30.216 12:54:34 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:10:30.216 12:54:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:30.216 12:54:34 -- common/autotest_common.sh@10 -- # set +x 00:10:30.216 12:54:34 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:10:30.216 12:54:34 -- json_config/common.sh@9 -- # local app=target 00:10:30.216 12:54:34 -- json_config/common.sh@10 -- # shift 00:10:30.216 12:54:34 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:30.216 12:54:34 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:30.216 12:54:34 -- json_config/common.sh@15 -- # local app_extra_params= 00:10:30.216 12:54:34 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:30.216 12:54:34 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:30.216 12:54:34 -- json_config/common.sh@22 -- # app_pid["$app"]=109957 00:10:30.216 12:54:34 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:30.216 Waiting for target to run... 00:10:30.216 12:54:34 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:10:30.216 12:54:34 -- json_config/common.sh@25 -- # waitforlisten 109957 /var/tmp/spdk_tgt.sock 00:10:30.216 12:54:34 -- common/autotest_common.sh@817 -- # '[' -z 109957 ']' 00:10:30.216 12:54:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:30.216 12:54:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:30.216 12:54:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:30.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:30.216 12:54:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:30.216 12:54:34 -- common/autotest_common.sh@10 -- # set +x 00:10:30.473 [2024-04-17 12:54:34.396429] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
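json_config launches a fresh spdk_tgt with --wait-for-rpc and later replays a saved configuration into it. Roughly what the test validates, as a hedged sketch of starting an SPDK app from a JSON config file (file name illustrative, assuming the spdk_app_opts fields declared in spdk/event.h):

```c
#include "spdk/event.h"

static void
app_started(void *arg)
{
	(void)arg;
	/* Subsystems and bdevs from the JSON config exist at this point. */
	spdk_app_stop(0);
}

int
main(void)
{
	struct spdk_app_opts opts = {};
	int rc;

	spdk_app_opts_init(&opts, sizeof(opts));
	opts.name = "json_config_sketch";               /* illustrative */
	opts.json_config_file = "spdk_tgt_config.json"; /* as written by save_config */

	rc = spdk_app_start(&opts, app_started, NULL);
	spdk_app_fini();
	return rc;
}
```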
00:10:30.473 [2024-04-17 12:54:34.396772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109957 ] 00:10:30.732 [2024-04-17 12:54:34.861288] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.991 [2024-04-17 12:54:35.075638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.250 12:54:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:31.250 12:54:35 -- common/autotest_common.sh@850 -- # return 0 00:10:31.250 12:54:35 -- json_config/common.sh@26 -- # echo '' 00:10:31.250 00:10:31.250 12:54:35 -- json_config/json_config.sh@269 -- # create_accel_config 00:10:31.250 12:54:35 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:10:31.250 12:54:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:31.250 12:54:35 -- common/autotest_common.sh@10 -- # set +x 00:10:31.508 12:54:35 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:10:31.508 12:54:35 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:10:31.508 12:54:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:31.508 12:54:35 -- common/autotest_common.sh@10 -- # set +x 00:10:31.508 12:54:35 -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:10:31.508 12:54:35 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:10:31.508 12:54:35 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:10:32.443 12:54:36 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:10:32.443 12:54:36 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:10:32.443 12:54:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:32.443 12:54:36 -- common/autotest_common.sh@10 -- # set +x 00:10:32.443 12:54:36 -- json_config/json_config.sh@45 -- # local ret=0 00:10:32.443 12:54:36 -- json_config/json_config.sh@46 -- # enabled_types=("bdev_register" "bdev_unregister") 00:10:32.443 12:54:36 -- json_config/json_config.sh@46 -- # local enabled_types 00:10:32.443 12:54:36 -- json_config/json_config.sh@48 -- # get_types=($(tgt_rpc notify_get_types | jq -r '.[]')) 00:10:32.443 12:54:36 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:10:32.443 12:54:36 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:10:32.443 12:54:36 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:10:32.734 12:54:36 -- json_config/json_config.sh@48 -- # local get_types 00:10:32.734 12:54:36 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:10:32.734 12:54:36 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:10:32.734 12:54:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:32.734 12:54:36 -- common/autotest_common.sh@10 -- # set +x 00:10:32.734 12:54:36 -- json_config/json_config.sh@55 -- # return 0 00:10:32.734 12:54:36 -- json_config/json_config.sh@278 -- # [[ 1 -eq 1 ]] 00:10:32.734 12:54:36 -- json_config/json_config.sh@279 -- # create_bdev_subsystem_config 00:10:32.734 12:54:36 -- json_config/json_config.sh@105 -- # timing_enter create_bdev_subsystem_config 00:10:32.734 12:54:36 -- 
common/autotest_common.sh@710 -- # xtrace_disable 00:10:32.734 12:54:36 -- common/autotest_common.sh@10 -- # set +x 00:10:32.734 12:54:36 -- json_config/json_config.sh@107 -- # expected_notifications=() 00:10:32.734 12:54:36 -- json_config/json_config.sh@107 -- # local expected_notifications 00:10:32.734 12:54:36 -- json_config/json_config.sh@111 -- # expected_notifications+=($(get_notifications)) 00:10:32.734 12:54:36 -- json_config/json_config.sh@111 -- # get_notifications 00:10:32.734 12:54:36 -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:10:32.734 12:54:36 -- json_config/json_config.sh@61 -- # IFS=: 00:10:32.734 12:54:36 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:32.734 12:54:36 -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:10:32.734 12:54:36 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:10:32.734 12:54:36 -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:10:32.992 12:54:36 -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:10:32.992 12:54:36 -- json_config/json_config.sh@61 -- # IFS=: 00:10:32.992 12:54:36 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:32.992 12:54:36 -- json_config/json_config.sh@113 -- # [[ 1 -eq 1 ]] 00:10:32.992 12:54:36 -- json_config/json_config.sh@114 -- # local lvol_store_base_bdev=Nvme0n1 00:10:32.992 12:54:36 -- json_config/json_config.sh@116 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:10:32.992 12:54:36 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:10:33.250 Nvme0n1p0 Nvme0n1p1 00:10:33.250 12:54:37 -- json_config/json_config.sh@117 -- # tgt_rpc bdev_split_create Malloc0 3 00:10:33.250 12:54:37 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:10:33.509 [2024-04-17 12:54:37.453634] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:10:33.509 [2024-04-17 12:54:37.453933] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:10:33.509 00:10:33.509 12:54:37 -- json_config/json_config.sh@118 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:10:33.509 12:54:37 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:10:33.767 Malloc3 00:10:33.767 12:54:37 -- json_config/json_config.sh@119 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:10:33.767 12:54:37 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:10:33.767 [2024-04-17 12:54:37.910533] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:34.026 [2024-04-17 12:54:37.910839] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:34.026 [2024-04-17 12:54:37.910923] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:10:34.026 [2024-04-17 12:54:37.911165] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:34.026 [2024-04-17 12:54:37.913811] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:34.026 [2024-04-17 12:54:37.913983] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:10:34.026 PTBdevFromMalloc3 00:10:34.026 12:54:37 -- json_config/json_config.sh@121 -- # tgt_rpc bdev_null_create Null0 32 512 00:10:34.026 12:54:37 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:10:34.026 Null0 00:10:34.026 12:54:38 -- json_config/json_config.sh@123 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:10:34.026 12:54:38 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:10:34.284 Malloc0 00:10:34.284 12:54:38 -- json_config/json_config.sh@124 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:10:34.284 12:54:38 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:10:34.542 Malloc1 00:10:34.542 12:54:38 -- json_config/json_config.sh@137 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:10:34.542 12:54:38 -- json_config/json_config.sh@140 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:10:35.109 102400+0 records in 00:10:35.109 102400+0 records out 00:10:35.109 104857600 bytes (105 MB, 100 MiB) copied, 0.316903 s, 331 MB/s 00:10:35.109 12:54:38 -- json_config/json_config.sh@141 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:10:35.109 12:54:38 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:10:35.109 aio_disk 00:10:35.109 12:54:39 -- json_config/json_config.sh@142 -- # expected_notifications+=(bdev_register:aio_disk) 00:10:35.109 12:54:39 -- json_config/json_config.sh@147 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:10:35.109 12:54:39 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:10:35.368 6630a86a-c94b-44ec-a6e3-e90f386fcd3a 00:10:35.368 12:54:39 -- json_config/json_config.sh@154 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:10:35.368 12:54:39 -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:10:35.368 12:54:39 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:10:35.633 12:54:39 -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:10:35.633 12:54:39 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:10:35.905 12:54:39 -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:10:35.905 12:54:39 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 
snapshot0 00:10:36.163 12:54:40 -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:10:36.164 12:54:40 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:10:36.422 12:54:40 -- json_config/json_config.sh@157 -- # [[ 0 -eq 1 ]] 00:10:36.422 12:54:40 -- json_config/json_config.sh@172 -- # [[ 0 -eq 1 ]] 00:10:36.422 12:54:40 -- json_config/json_config.sh@178 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:786379f2-3020-445a-918c-64ce75998ecb bdev_register:27feb5d3-683d-4178-b20a-a3991b21adf7 bdev_register:f66aba14-252a-4cf6-b285-af94be9536d3 bdev_register:e34b5217-541a-4e81-8a04-552de1636e54 00:10:36.422 12:54:40 -- json_config/json_config.sh@67 -- # local events_to_check 00:10:36.422 12:54:40 -- json_config/json_config.sh@68 -- # local recorded_events 00:10:36.422 12:54:40 -- json_config/json_config.sh@71 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:10:36.422 12:54:40 -- json_config/json_config.sh@71 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:786379f2-3020-445a-918c-64ce75998ecb bdev_register:27feb5d3-683d-4178-b20a-a3991b21adf7 bdev_register:f66aba14-252a-4cf6-b285-af94be9536d3 bdev_register:e34b5217-541a-4e81-8a04-552de1636e54 00:10:36.422 12:54:40 -- json_config/json_config.sh@71 -- # sort 00:10:36.422 12:54:40 -- json_config/json_config.sh@72 -- # recorded_events=($(get_notifications | sort)) 00:10:36.422 12:54:40 -- json_config/json_config.sh@72 -- # get_notifications 00:10:36.422 12:54:40 -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:10:36.422 12:54:40 -- json_config/json_config.sh@72 -- # sort 00:10:36.422 12:54:40 -- json_config/json_config.sh@61 -- # IFS=: 00:10:36.422 12:54:40 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:36.422 12:54:40 -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:10:36.422 12:54:40 -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:10:36.422 12:54:40 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:10:36.681 12:54:40 -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:10:36.681 12:54:40 -- json_config/json_config.sh@61 -- # IFS=: 00:10:36.681 12:54:40 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:36.681 12:54:40 -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p1 00:10:36.681 12:54:40 -- json_config/json_config.sh@61 -- # IFS=: 00:10:36.681 12:54:40 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:36.681 12:54:40 -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p0 00:10:36.681 12:54:40 -- json_config/json_config.sh@61 -- # IFS=: 00:10:36.681 12:54:40 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:36.681 12:54:40 -- json_config/json_config.sh@62 -- 
# echo bdev_register:Malloc3 00:10:36.681 12:54:40 -- json_config/json_config.sh@61 -- # IFS=: 00:10:36.681 12:54:40 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:36.681 12:54:40 -- json_config/json_config.sh@62 -- # echo bdev_register:PTBdevFromMalloc3 00:10:36.681 12:54:40 -- json_config/json_config.sh@61 -- # IFS=: 00:10:36.681 12:54:40 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:36.681 12:54:40 -- json_config/json_config.sh@62 -- # echo bdev_register:Null0 00:10:36.681 12:54:40 -- json_config/json_config.sh@61 -- # IFS=: 00:10:36.681 12:54:40 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:36.681 12:54:40 -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0 00:10:36.681 12:54:40 -- json_config/json_config.sh@61 -- # IFS=: 00:10:36.681 12:54:40 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:36.681 12:54:40 -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p2 00:10:36.681 12:54:40 -- json_config/json_config.sh@61 -- # IFS=: 00:10:36.681 12:54:40 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:36.681 12:54:40 -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p1 00:10:36.681 12:54:40 -- json_config/json_config.sh@61 -- # IFS=: 00:10:36.681 12:54:40 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:36.681 12:54:40 -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p0 00:10:36.681 12:54:40 -- json_config/json_config.sh@61 -- # IFS=: 00:10:36.681 12:54:40 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:36.681 12:54:40 -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc1 00:10:36.681 12:54:40 -- json_config/json_config.sh@61 -- # IFS=: 00:10:36.681 12:54:40 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:36.681 12:54:40 -- json_config/json_config.sh@62 -- # echo bdev_register:aio_disk 00:10:36.681 12:54:40 -- json_config/json_config.sh@61 -- # IFS=: 00:10:36.681 12:54:40 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:36.681 12:54:40 -- json_config/json_config.sh@62 -- # echo bdev_register:786379f2-3020-445a-918c-64ce75998ecb 00:10:36.681 12:54:40 -- json_config/json_config.sh@61 -- # IFS=: 00:10:36.681 12:54:40 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:36.681 12:54:40 -- json_config/json_config.sh@62 -- # echo bdev_register:27feb5d3-683d-4178-b20a-a3991b21adf7 00:10:36.681 12:54:40 -- json_config/json_config.sh@61 -- # IFS=: 00:10:36.681 12:54:40 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:36.681 12:54:40 -- json_config/json_config.sh@62 -- # echo bdev_register:f66aba14-252a-4cf6-b285-af94be9536d3 00:10:36.681 12:54:40 -- json_config/json_config.sh@61 -- # IFS=: 00:10:36.681 12:54:40 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:36.681 12:54:40 -- json_config/json_config.sh@62 -- # echo bdev_register:e34b5217-541a-4e81-8a04-552de1636e54 00:10:36.681 12:54:40 -- json_config/json_config.sh@61 -- # IFS=: 00:10:36.681 12:54:40 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:36.681 12:54:40 -- json_config/json_config.sh@74 -- # [[ bdev_register:27feb5d3-683d-4178-b20a-a3991b21adf7 bdev_register:786379f2-3020-445a-918c-64ce75998ecb bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 
bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:e34b5217-541a-4e81-8a04-552de1636e54 bdev_register:f66aba14-252a-4cf6-b285-af94be9536d3 != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\2\7\f\e\b\5\d\3\-\6\8\3\d\-\4\1\7\8\-\b\2\0\a\-\a\3\9\9\1\b\2\1\a\d\f\7\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\7\8\6\3\7\9\f\2\-\3\0\2\0\-\4\4\5\a\-\9\1\8\c\-\6\4\c\e\7\5\9\9\8\e\c\b\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\e\3\4\b\5\2\1\7\-\5\4\1\a\-\4\e\8\1\-\8\a\0\4\-\5\5\2\d\e\1\6\3\6\e\5\4\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\f\6\6\a\b\a\1\4\-\2\5\2\a\-\4\c\f\6\-\b\2\8\5\-\a\f\9\4\b\e\9\5\3\6\d\3 ]] 00:10:36.681 12:54:40 -- json_config/json_config.sh@86 -- # cat 00:10:36.681 12:54:40 -- json_config/json_config.sh@86 -- # printf ' %s\n' bdev_register:27feb5d3-683d-4178-b20a-a3991b21adf7 bdev_register:786379f2-3020-445a-918c-64ce75998ecb bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:e34b5217-541a-4e81-8a04-552de1636e54 bdev_register:f66aba14-252a-4cf6-b285-af94be9536d3 00:10:36.681 Expected events matched: 00:10:36.681 bdev_register:27feb5d3-683d-4178-b20a-a3991b21adf7 00:10:36.681 bdev_register:786379f2-3020-445a-918c-64ce75998ecb 00:10:36.681 bdev_register:Malloc0 00:10:36.681 bdev_register:Malloc0p0 00:10:36.681 bdev_register:Malloc0p1 00:10:36.681 bdev_register:Malloc0p2 00:10:36.681 bdev_register:Malloc1 00:10:36.681 bdev_register:Malloc3 00:10:36.681 bdev_register:Null0 00:10:36.681 bdev_register:Nvme0n1 00:10:36.681 bdev_register:Nvme0n1p0 00:10:36.681 bdev_register:Nvme0n1p1 00:10:36.681 bdev_register:PTBdevFromMalloc3 00:10:36.681 bdev_register:aio_disk 00:10:36.681 bdev_register:e34b5217-541a-4e81-8a04-552de1636e54 00:10:36.681 bdev_register:f66aba14-252a-4cf6-b285-af94be9536d3 00:10:36.681 12:54:40 -- json_config/json_config.sh@180 -- # timing_exit create_bdev_subsystem_config 00:10:36.681 12:54:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:36.681 12:54:40 -- common/autotest_common.sh@10 -- # set +x 00:10:36.681 12:54:40 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:10:36.681 12:54:40 -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:10:36.681 12:54:40 -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:10:36.681 12:54:40 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:10:36.681 12:54:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:36.681 12:54:40 -- common/autotest_common.sh@10 -- # set +x 00:10:36.681 12:54:40 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:10:36.681 12:54:40 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:10:36.681 12:54:40 -- 
json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:10:36.939 MallocBdevForConfigChangeCheck 00:10:36.939 12:54:40 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:10:36.939 12:54:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:36.939 12:54:40 -- common/autotest_common.sh@10 -- # set +x 00:10:36.939 12:54:41 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:10:36.939 12:54:41 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:37.506 INFO: shutting down applications... 00:10:37.506 12:54:41 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:10:37.506 12:54:41 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:10:37.506 12:54:41 -- json_config/json_config.sh@368 -- # json_config_clear target 00:10:37.506 12:54:41 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:10:37.506 12:54:41 -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:10:37.506 [2024-04-17 12:54:41.544835] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:10:37.764 Calling clear_vhost_scsi_subsystem 00:10:37.764 Calling clear_iscsi_subsystem 00:10:37.764 Calling clear_vhost_blk_subsystem 00:10:37.764 Calling clear_nbd_subsystem 00:10:37.764 Calling clear_nvmf_subsystem 00:10:37.764 Calling clear_bdev_subsystem 00:10:37.764 12:54:41 -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:10:37.764 12:54:41 -- json_config/json_config.sh@343 -- # count=100 00:10:37.764 12:54:41 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:10:37.764 12:54:41 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:37.764 12:54:41 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:10:37.764 12:54:41 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:10:38.330 12:54:42 -- json_config/json_config.sh@345 -- # break 00:10:38.330 12:54:42 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:10:38.330 12:54:42 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:10:38.330 12:54:42 -- json_config/common.sh@31 -- # local app=target 00:10:38.330 12:54:42 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:38.330 12:54:42 -- json_config/common.sh@35 -- # [[ -n 109957 ]] 00:10:38.330 12:54:42 -- json_config/common.sh@38 -- # kill -SIGINT 109957 00:10:38.330 12:54:42 -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:38.330 12:54:42 -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:38.330 12:54:42 -- json_config/common.sh@41 -- # kill -0 109957 00:10:38.330 12:54:42 -- json_config/common.sh@45 -- # sleep 0.5 00:10:38.588 12:54:42 -- json_config/common.sh@40 -- # (( i++ )) 00:10:38.588 12:54:42 -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:38.588 12:54:42 -- json_config/common.sh@41 -- # kill -0 109957 00:10:38.588 12:54:42 -- json_config/common.sh@45 -- # sleep 0.5 00:10:39.154 SPDK target shutdown done 00:10:39.154 INFO: relaunching applications... 00:10:39.154 Waiting for target to run... 
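The sleep 0.5 retries above are json_config/common.sh's shutdown wait; a minimal sketch of the same loop, with the pid variable name assumed:

    # Ask the target to exit, then poll for up to 30 half-second
    # intervals; kill -0 only tests whether the pid is still alive.
    kill -SIGINT "$tgt_pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$tgt_pid" 2>/dev/null || break   # target has exited
        sleep 0.5
    done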
00:10:39.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:39.154 12:54:43 -- json_config/common.sh@40 -- # (( i++ )) 00:10:39.154 12:54:43 -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:39.154 12:54:43 -- json_config/common.sh@41 -- # kill -0 109957 00:10:39.154 12:54:43 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:39.154 12:54:43 -- json_config/common.sh@43 -- # break 00:10:39.154 12:54:43 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:10:39.154 12:54:43 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:39.154 12:54:43 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:10:39.154 12:54:43 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:39.154 12:54:43 -- json_config/common.sh@9 -- # local app=target 00:10:39.154 12:54:43 -- json_config/common.sh@10 -- # shift 00:10:39.154 12:54:43 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:39.154 12:54:43 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:39.154 12:54:43 -- json_config/common.sh@15 -- # local app_extra_params= 00:10:39.154 12:54:43 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:39.154 12:54:43 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:39.154 12:54:43 -- json_config/common.sh@22 -- # app_pid["$app"]=110226 00:10:39.154 12:54:43 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:39.154 12:54:43 -- json_config/common.sh@25 -- # waitforlisten 110226 /var/tmp/spdk_tgt.sock 00:10:39.154 12:54:43 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:39.154 12:54:43 -- common/autotest_common.sh@817 -- # '[' -z 110226 ']' 00:10:39.154 12:54:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:39.154 12:54:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:39.154 12:54:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:39.154 12:54:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:39.154 12:54:43 -- common/autotest_common.sh@10 -- # set +x 00:10:39.154 [2024-04-17 12:54:43.263948] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:10:39.154 [2024-04-17 12:54:43.264356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110226 ] 00:10:39.718 [2024-04-17 12:54:43.699961] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.975 [2024-04-17 12:54:43.882167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.542 [2024-04-17 12:54:44.553717] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:10:40.542 [2024-04-17 12:54:44.553996] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:10:40.542 [2024-04-17 12:54:44.561678] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:10:40.542 [2024-04-17 12:54:44.561859] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:10:40.542 [2024-04-17 12:54:44.569710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:40.542 [2024-04-17 12:54:44.569890] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:10:40.542 [2024-04-17 12:54:44.570041] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:10:40.542 [2024-04-17 12:54:44.662828] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:40.542 [2024-04-17 12:54:44.663095] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:40.542 [2024-04-17 12:54:44.663253] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:40.542 [2024-04-17 12:54:44.663386] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:40.542 [2024-04-17 12:54:44.664053] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:40.542 [2024-04-17 12:54:44.664216] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:10:40.800 00:10:40.800 INFO: Checking if target configuration is the same... 00:10:40.800 12:54:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:40.800 12:54:44 -- common/autotest_common.sh@850 -- # return 0 00:10:40.800 12:54:44 -- json_config/common.sh@26 -- # echo '' 00:10:40.800 12:54:44 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:10:40.800 12:54:44 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:10:40.800 12:54:44 -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:40.800 12:54:44 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:10:40.800 12:54:44 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:40.800 + '[' 2 -ne 2 ']' 00:10:40.800 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:10:40.800 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:10:40.800 + rootdir=/home/vagrant/spdk_repo/spdk 00:10:40.800 +++ basename /dev/fd/62 00:10:40.800 ++ mktemp /tmp/62.XXX 00:10:40.800 + tmp_file_1=/tmp/62.mgh 00:10:40.800 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:40.800 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:10:40.800 + tmp_file_2=/tmp/spdk_tgt_config.json.SPp 00:10:40.800 + ret=0 00:10:40.800 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:41.059 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:41.330 + diff -u /tmp/62.mgh /tmp/spdk_tgt_config.json.SPp 00:10:41.330 INFO: JSON config files are the same 00:10:41.330 + echo 'INFO: JSON config files are the same' 00:10:41.330 + rm /tmp/62.mgh /tmp/spdk_tgt_config.json.SPp 00:10:41.330 + exit 0 00:10:41.330 INFO: changing configuration and checking if this can be detected... 00:10:41.330 12:54:45 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:10:41.330 12:54:45 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:10:41.330 12:54:45 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:10:41.330 12:54:45 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:10:41.330 12:54:45 -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:41.330 12:54:45 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:10:41.330 12:54:45 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:41.330 + '[' 2 -ne 2 ']' 00:10:41.330 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:10:41.599 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:10:41.599 + rootdir=/home/vagrant/spdk_repo/spdk 00:10:41.599 +++ basename /dev/fd/62 00:10:41.599 ++ mktemp /tmp/62.XXX 00:10:41.599 + tmp_file_1=/tmp/62.3Q5 00:10:41.599 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:41.599 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:10:41.599 + tmp_file_2=/tmp/spdk_tgt_config.json.oJl 00:10:41.599 + ret=0 00:10:41.599 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:41.857 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:41.857 + diff -u /tmp/62.3Q5 /tmp/spdk_tgt_config.json.oJl 00:10:41.857 + ret=1 00:10:41.857 + echo '=== Start of file: /tmp/62.3Q5 ===' 00:10:41.857 + cat /tmp/62.3Q5 00:10:41.857 + echo '=== End of file: /tmp/62.3Q5 ===' 00:10:41.857 + echo '' 00:10:41.857 + echo '=== Start of file: /tmp/spdk_tgt_config.json.oJl ===' 00:10:41.857 + cat /tmp/spdk_tgt_config.json.oJl 00:10:41.857 + echo '=== End of file: /tmp/spdk_tgt_config.json.oJl ===' 00:10:41.857 + echo '' 00:10:41.857 + rm /tmp/62.3Q5 /tmp/spdk_tgt_config.json.oJl 00:10:41.857 + exit 1 00:10:41.857 INFO: configuration change detected. 00:10:41.857 12:54:45 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 
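json_diff.sh normalizes both configurations with config_filter.py -method sort before diffing, so JSON key order cannot cause false mismatches; a minimal sketch of the flow traced above, with temp-file handling simplified (mktemp suffixes differ per run):

    # Sort the live config (from save_config) and the on-disk reference,
    # then compare; diff exiting 0 means the target still matches the file.
    rootdir=/home/vagrant/spdk_repo/spdk
    live=$(mktemp /tmp/62.XXX)
    ref=$(mktemp /tmp/spdk_tgt_config.json.XXX)
    "$rootdir"/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | "$rootdir"/test/json_config/config_filter.py -method sort > "$live"
    "$rootdir"/test/json_config/config_filter.py -method sort \
        < "$rootdir"/spdk_tgt_config.json > "$ref"
    diff -u "$live" "$ref" && echo 'INFO: JSON config files are the same'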
00:10:41.857 12:54:45 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:10:41.857 12:54:45 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:10:41.857 12:54:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:41.857 12:54:45 -- common/autotest_common.sh@10 -- # set +x 00:10:41.857 12:54:45 -- json_config/json_config.sh@307 -- # local ret=0 00:10:41.857 12:54:45 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:10:41.857 12:54:45 -- json_config/json_config.sh@317 -- # [[ -n 110226 ]] 00:10:41.857 12:54:45 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:10:41.857 12:54:45 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:10:41.857 12:54:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:41.857 12:54:45 -- common/autotest_common.sh@10 -- # set +x 00:10:41.857 12:54:45 -- json_config/json_config.sh@186 -- # [[ 1 -eq 1 ]] 00:10:41.857 12:54:45 -- json_config/json_config.sh@187 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:10:41.857 12:54:45 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:10:42.116 12:54:46 -- json_config/json_config.sh@188 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:10:42.116 12:54:46 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:10:42.375 12:54:46 -- json_config/json_config.sh@189 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:10:42.375 12:54:46 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:10:42.635 12:54:46 -- json_config/json_config.sh@190 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:10:42.635 12:54:46 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:10:42.894 12:54:46 -- json_config/json_config.sh@193 -- # uname -s 00:10:42.894 12:54:46 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:10:42.894 12:54:46 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:10:42.894 12:54:46 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:10:42.894 12:54:46 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:10:42.894 12:54:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:42.894 12:54:46 -- common/autotest_common.sh@10 -- # set +x 00:10:42.894 12:54:46 -- json_config/json_config.sh@323 -- # killprocess 110226 00:10:42.894 12:54:46 -- common/autotest_common.sh@924 -- # '[' -z 110226 ']' 00:10:42.894 12:54:46 -- common/autotest_common.sh@928 -- # kill -0 110226 00:10:42.894 12:54:46 -- common/autotest_common.sh@929 -- # uname 00:10:42.894 12:54:47 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:10:42.894 12:54:47 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 110226 00:10:42.894 12:54:47 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:10:42.894 12:54:47 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:10:42.894 12:54:47 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 110226' 00:10:42.894 killing process with pid 110226 00:10:42.894 12:54:47 -- common/autotest_common.sh@943 -- # kill 110226 00:10:42.895 12:54:47 -- common/autotest_common.sh@948 -- # wait 110226 00:10:44.273 12:54:48 -- json_config/json_config.sh@326 -- # rm -f 
/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:44.273 12:54:48 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:10:44.273 12:54:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:44.273 12:54:48 -- common/autotest_common.sh@10 -- # set +x 00:10:44.273 INFO: Success 00:10:44.273 12:54:48 -- json_config/json_config.sh@328 -- # return 0 00:10:44.273 12:54:48 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:10:44.273 00:10:44.273 real 0m13.826s 00:10:44.273 user 0m19.962s 00:10:44.273 sys 0m2.320s 00:10:44.273 12:54:48 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:10:44.273 12:54:48 -- common/autotest_common.sh@10 -- # set +x 00:10:44.273 ************************************ 00:10:44.273 END TEST json_config 00:10:44.273 ************************************ 00:10:44.273 12:54:48 -- spdk/autotest.sh@168 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:44.273 12:54:48 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:10:44.273 12:54:48 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:44.273 12:54:48 -- common/autotest_common.sh@10 -- # set +x 00:10:44.273 ************************************ 00:10:44.273 START TEST json_config_extra_key 00:10:44.273 ************************************ 00:10:44.273 12:54:48 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:44.273 12:54:48 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:44.273 12:54:48 -- nvmf/common.sh@7 -- # uname -s 00:10:44.273 12:54:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.273 12:54:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.273 12:54:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:44.273 12:54:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.273 12:54:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.273 12:54:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.273 12:54:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.273 12:54:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.273 12:54:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.273 12:54:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.273 12:54:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bc392b05-2b32-4fd4-9037-bda36f489934 00:10:44.274 12:54:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=bc392b05-2b32-4fd4-9037-bda36f489934 00:10:44.274 12:54:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.274 12:54:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.274 12:54:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:44.274 12:54:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:44.274 12:54:48 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:44.274 12:54:48 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.274 12:54:48 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.274 12:54:48 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.274 12:54:48 -- paths/export.sh@2 -- # 
PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:44.274 12:54:48 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:44.274 12:54:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:44.274 12:54:48 -- paths/export.sh@5 -- # export PATH 00:10:44.274 12:54:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:44.274 12:54:48 -- nvmf/common.sh@47 -- # : 0 00:10:44.274 12:54:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:44.274 12:54:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:44.274 12:54:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:44.274 12:54:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.274 12:54:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.274 12:54:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:44.274 12:54:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:44.274 12:54:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:44.274 12:54:48 -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:10:44.274 12:54:48 -- json_config/json_config_extra_key.sh@17 -- # app_pid=([target]="") 00:10:44.274 12:54:48 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:10:44.274 12:54:48 -- json_config/json_config_extra_key.sh@18 -- # app_socket=([target]='/var/tmp/spdk_tgt.sock') 00:10:44.274 12:54:48 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:10:44.274 12:54:48 -- json_config/json_config_extra_key.sh@19 -- # app_params=([target]='-m 0x1 -s 1024') 00:10:44.274 12:54:48 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:10:44.274 12:54:48 -- json_config/json_config_extra_key.sh@20 -- # configs_path=([target]="$rootdir/test/json_config/extra_key.json") 00:10:44.274 12:54:48 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:10:44.274 12:54:48 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:44.274 12:54:48 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
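common.sh keeps per-app state in bash associative arrays keyed by app name ('target' here; 'initiator' appears in other configurations); a minimal sketch assembled from the declarations in the trace above:

    # Per-app bookkeeping as declared in json_config_extra_key.sh.
    rootdir=/home/vagrant/spdk_repo/spdk
    declare -A app_pid=([target]="")
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]="$rootdir/test/json_config/extra_key.json")
    echo "target RPC socket: ${app_socket[target]}"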
00:10:44.274 INFO: launching applications... 00:10:44.274 12:54:48 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:44.274 12:54:48 -- json_config/common.sh@9 -- # local app=target 00:10:44.274 12:54:48 -- json_config/common.sh@10 -- # shift 00:10:44.274 12:54:48 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:44.274 12:54:48 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:44.274 12:54:48 -- json_config/common.sh@15 -- # local app_extra_params= 00:10:44.274 12:54:48 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:44.274 12:54:48 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:44.274 12:54:48 -- json_config/common.sh@22 -- # app_pid["$app"]=110425 00:10:44.274 12:54:48 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:44.274 Waiting for target to run... 00:10:44.274 12:54:48 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:44.274 12:54:48 -- json_config/common.sh@25 -- # waitforlisten 110425 /var/tmp/spdk_tgt.sock 00:10:44.274 12:54:48 -- common/autotest_common.sh@817 -- # '[' -z 110425 ']' 00:10:44.274 12:54:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:44.274 12:54:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:44.274 12:54:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:44.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:44.274 12:54:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:44.274 12:54:48 -- common/autotest_common.sh@10 -- # set +x 00:10:44.274 [2024-04-17 12:54:48.310035] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:10:44.274 [2024-04-17 12:54:48.310678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110425 ] 00:10:44.843 [2024-04-17 12:54:48.789113] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.843 [2024-04-17 12:54:48.969222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.780 00:10:45.780 INFO: shutting down applications... 00:10:45.780 12:54:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:45.780 12:54:49 -- common/autotest_common.sh@850 -- # return 0 00:10:45.780 12:54:49 -- json_config/common.sh@26 -- # echo '' 00:10:45.780 12:54:49 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
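Unlike the --wait-for-rpc runs, this app loads its whole configuration from extra_key.json at boot. A minimal sketch of the launch plus a stand-in for waitforlisten (polling spdk_get_version is an assumption, not the helper's actual implementation):

    # Boot from a canned JSON config, then wait for the RPC socket.
    rootdir=/home/vagrant/spdk_repo/spdk
    "$rootdir"/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json "$rootdir"/test/json_config/extra_key.json &
    until "$rootdir"/scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
        spdk_get_version >/dev/null 2>&1; do sleep 0.1; done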
00:10:45.780 12:54:49 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:10:45.780 12:54:49 -- json_config/common.sh@31 -- # local app=target 00:10:45.780 12:54:49 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:45.780 12:54:49 -- json_config/common.sh@35 -- # [[ -n 110425 ]] 00:10:45.780 12:54:49 -- json_config/common.sh@38 -- # kill -SIGINT 110425 00:10:45.780 12:54:49 -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:45.780 12:54:49 -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:45.780 12:54:49 -- json_config/common.sh@41 -- # kill -0 110425 00:10:45.780 12:54:49 -- json_config/common.sh@45 -- # sleep 0.5 00:10:46.038 12:54:50 -- json_config/common.sh@40 -- # (( i++ )) 00:10:46.038 12:54:50 -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:46.038 12:54:50 -- json_config/common.sh@41 -- # kill -0 110425 00:10:46.038 12:54:50 -- json_config/common.sh@45 -- # sleep 0.5 00:10:46.605 12:54:50 -- json_config/common.sh@40 -- # (( i++ )) 00:10:46.605 12:54:50 -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:46.605 12:54:50 -- json_config/common.sh@41 -- # kill -0 110425 00:10:46.605 12:54:50 -- json_config/common.sh@45 -- # sleep 0.5 00:10:47.172 12:54:51 -- json_config/common.sh@40 -- # (( i++ )) 00:10:47.172 12:54:51 -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:47.172 12:54:51 -- json_config/common.sh@41 -- # kill -0 110425 00:10:47.172 12:54:51 -- json_config/common.sh@45 -- # sleep 0.5 00:10:47.739 12:54:51 -- json_config/common.sh@40 -- # (( i++ )) 00:10:47.739 12:54:51 -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:47.739 12:54:51 -- json_config/common.sh@41 -- # kill -0 110425 00:10:47.739 12:54:51 -- json_config/common.sh@45 -- # sleep 0.5 00:10:47.997 SPDK target shutdown done 00:10:47.997 Success 00:10:47.997 12:54:52 -- json_config/common.sh@40 -- # (( i++ )) 00:10:47.997 12:54:52 -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:47.997 12:54:52 -- json_config/common.sh@41 -- # kill -0 110425 00:10:47.997 12:54:52 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:47.997 12:54:52 -- json_config/common.sh@43 -- # break 00:10:47.997 12:54:52 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:10:47.997 12:54:52 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:47.997 12:54:52 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:10:47.997 ************************************ 00:10:47.997 END TEST json_config_extra_key 00:10:47.997 ************************************ 00:10:47.997 00:10:47.997 real 0m3.946s 00:10:47.997 user 0m3.761s 00:10:47.997 sys 0m0.570s 00:10:47.997 12:54:52 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:10:47.997 12:54:52 -- common/autotest_common.sh@10 -- # set +x 00:10:47.997 12:54:52 -- spdk/autotest.sh@169 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:47.997 12:54:52 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:10:47.997 12:54:52 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:47.997 12:54:52 -- common/autotest_common.sh@10 -- # set +x 00:10:48.256 ************************************ 00:10:48.256 START TEST alias_rpc 00:10:48.256 ************************************ 00:10:48.256 12:54:52 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:48.256 * Looking for test storage... 
00:10:48.256 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:10:48.256 12:54:52 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:48.256 12:54:52 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=110541 00:10:48.256 12:54:52 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:48.256 12:54:52 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 110541 00:10:48.256 12:54:52 -- common/autotest_common.sh@817 -- # '[' -z 110541 ']' 00:10:48.256 12:54:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.256 12:54:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:48.256 12:54:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.256 12:54:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:48.256 12:54:52 -- common/autotest_common.sh@10 -- # set +x 00:10:48.256 [2024-04-17 12:54:52.312669] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:10:48.256 [2024-04-17 12:54:52.313107] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110541 ] 00:10:48.513 [2024-04-17 12:54:52.482735] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.770 [2024-04-17 12:54:52.689350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.336 12:54:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:49.336 12:54:53 -- common/autotest_common.sh@850 -- # return 0 00:10:49.336 12:54:53 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:10:49.903 12:54:53 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 110541 00:10:49.903 12:54:53 -- common/autotest_common.sh@924 -- # '[' -z 110541 ']' 00:10:49.903 12:54:53 -- common/autotest_common.sh@928 -- # kill -0 110541 00:10:49.903 12:54:53 -- common/autotest_common.sh@929 -- # uname 00:10:49.903 12:54:53 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:10:49.903 12:54:53 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 110541 00:10:49.903 killing process with pid 110541 00:10:49.903 12:54:53 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:10:49.903 12:54:53 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:10:49.903 12:54:53 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 110541' 00:10:49.903 12:54:53 -- common/autotest_common.sh@943 -- # kill 110541 00:10:49.903 12:54:53 -- common/autotest_common.sh@948 -- # wait 110541 00:10:51.806 ************************************ 00:10:51.806 END TEST alias_rpc 00:10:51.806 ************************************ 00:10:51.806 00:10:51.806 real 0m3.746s 00:10:51.806 user 0m3.908s 00:10:51.806 sys 0m0.526s 00:10:51.806 12:54:55 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:10:51.806 12:54:55 -- common/autotest_common.sh@10 -- # set +x 00:10:51.806 12:54:55 -- spdk/autotest.sh@171 -- # [[ 0 -eq 0 ]] 00:10:51.806 12:54:55 -- spdk/autotest.sh@172 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:51.806 12:54:55 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:10:51.806 12:54:55 -- 
common/autotest_common.sh@1081 -- # xtrace_disable 00:10:51.806 12:54:55 -- common/autotest_common.sh@10 -- # set +x 00:10:52.064 ************************************ 00:10:52.064 START TEST spdkcli_tcp 00:10:52.064 ************************************ 00:10:52.064 12:54:55 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:52.064 * Looking for test storage... 00:10:52.064 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:10:52.064 12:54:56 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:10:52.064 12:54:56 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:10:52.064 12:54:56 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:10:52.064 12:54:56 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:10:52.064 12:54:56 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:10:52.064 12:54:56 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:52.064 12:54:56 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:10:52.064 12:54:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:52.064 12:54:56 -- common/autotest_common.sh@10 -- # set +x 00:10:52.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.065 12:54:56 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=110669 00:10:52.065 12:54:56 -- spdkcli/tcp.sh@27 -- # waitforlisten 110669 00:10:52.065 12:54:56 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:10:52.065 12:54:56 -- common/autotest_common.sh@817 -- # '[' -z 110669 ']' 00:10:52.065 12:54:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.065 12:54:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:52.065 12:54:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.065 12:54:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:52.065 12:54:56 -- common/autotest_common.sh@10 -- # set +x 00:10:52.065 [2024-04-17 12:54:56.135164] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
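tcp.sh exercises the RPC client over TCP by bridging the target's UNIX socket to 127.0.0.1:9998 with socat; a minimal sketch matching the commands traced below (-r is the retry count, -t the timeout in seconds):

    # Bridge the UNIX RPC socket to TCP, then issue an RPC through it.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 \
        -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"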
00:10:52.065 [2024-04-17 12:54:56.135615] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110669 ] 00:10:52.322 [2024-04-17 12:54:56.302733] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:52.580 [2024-04-17 12:54:56.504662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.581 [2024-04-17 12:54:56.504664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.218 12:54:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:53.218 12:54:57 -- common/autotest_common.sh@850 -- # return 0 00:10:53.218 12:54:57 -- spdkcli/tcp.sh@31 -- # socat_pid=110696 00:10:53.218 12:54:57 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:10:53.218 12:54:57 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:10:53.479 [ 00:10:53.479 "spdk_get_version", 00:10:53.479 "rpc_get_methods", 00:10:53.479 "keyring_get_keys", 00:10:53.479 "trace_get_info", 00:10:53.479 "trace_get_tpoint_group_mask", 00:10:53.479 "trace_disable_tpoint_group", 00:10:53.479 "trace_enable_tpoint_group", 00:10:53.479 "trace_clear_tpoint_mask", 00:10:53.479 "trace_set_tpoint_mask", 00:10:53.479 "framework_get_pci_devices", 00:10:53.479 "framework_get_config", 00:10:53.479 "framework_get_subsystems", 00:10:53.479 "iobuf_get_stats", 00:10:53.479 "iobuf_set_options", 00:10:53.479 "sock_set_default_impl", 00:10:53.479 "sock_impl_set_options", 00:10:53.479 "sock_impl_get_options", 00:10:53.479 "vmd_rescan", 00:10:53.479 "vmd_remove_device", 00:10:53.479 "vmd_enable", 00:10:53.479 "accel_get_stats", 00:10:53.479 "accel_set_options", 00:10:53.479 "accel_set_driver", 00:10:53.479 "accel_crypto_key_destroy", 00:10:53.479 "accel_crypto_keys_get", 00:10:53.479 "accel_crypto_key_create", 00:10:53.479 "accel_assign_opc", 00:10:53.479 "accel_get_module_info", 00:10:53.479 "accel_get_opc_assignments", 00:10:53.479 "notify_get_notifications", 00:10:53.479 "notify_get_types", 00:10:53.479 "bdev_get_histogram", 00:10:53.479 "bdev_enable_histogram", 00:10:53.479 "bdev_set_qos_limit", 00:10:53.479 "bdev_set_qd_sampling_period", 00:10:53.479 "bdev_get_bdevs", 00:10:53.479 "bdev_reset_iostat", 00:10:53.479 "bdev_get_iostat", 00:10:53.479 "bdev_examine", 00:10:53.479 "bdev_wait_for_examine", 00:10:53.479 "bdev_set_options", 00:10:53.479 "scsi_get_devices", 00:10:53.479 "thread_set_cpumask", 00:10:53.479 "framework_get_scheduler", 00:10:53.479 "framework_set_scheduler", 00:10:53.479 "framework_get_reactors", 00:10:53.479 "thread_get_io_channels", 00:10:53.479 "thread_get_pollers", 00:10:53.479 "thread_get_stats", 00:10:53.479 "framework_monitor_context_switch", 00:10:53.479 "spdk_kill_instance", 00:10:53.479 "log_enable_timestamps", 00:10:53.479 "log_get_flags", 00:10:53.479 "log_clear_flag", 00:10:53.479 "log_set_flag", 00:10:53.479 "log_get_level", 00:10:53.479 "log_set_level", 00:10:53.479 "log_get_print_level", 00:10:53.479 "log_set_print_level", 00:10:53.479 "framework_enable_cpumask_locks", 00:10:53.479 "framework_disable_cpumask_locks", 00:10:53.479 "framework_wait_init", 00:10:53.479 "framework_start_init", 00:10:53.479 "virtio_blk_create_transport", 00:10:53.479 "virtio_blk_get_transports", 00:10:53.479 "vhost_controller_set_coalescing", 00:10:53.479 "vhost_get_controllers", 00:10:53.479 
"vhost_delete_controller", 00:10:53.479 "vhost_create_blk_controller", 00:10:53.479 "vhost_scsi_controller_remove_target", 00:10:53.479 "vhost_scsi_controller_add_target", 00:10:53.479 "vhost_start_scsi_controller", 00:10:53.479 "vhost_create_scsi_controller", 00:10:53.479 "nbd_get_disks", 00:10:53.479 "nbd_stop_disk", 00:10:53.479 "nbd_start_disk", 00:10:53.479 "env_dpdk_get_mem_stats", 00:10:53.479 "nvmf_subsystem_get_listeners", 00:10:53.479 "nvmf_subsystem_get_qpairs", 00:10:53.479 "nvmf_subsystem_get_controllers", 00:10:53.479 "nvmf_get_stats", 00:10:53.479 "nvmf_get_transports", 00:10:53.479 "nvmf_create_transport", 00:10:53.479 "nvmf_get_targets", 00:10:53.479 "nvmf_delete_target", 00:10:53.479 "nvmf_create_target", 00:10:53.479 "nvmf_subsystem_allow_any_host", 00:10:53.479 "nvmf_subsystem_remove_host", 00:10:53.479 "nvmf_subsystem_add_host", 00:10:53.479 "nvmf_ns_remove_host", 00:10:53.479 "nvmf_ns_add_host", 00:10:53.479 "nvmf_subsystem_remove_ns", 00:10:53.479 "nvmf_subsystem_add_ns", 00:10:53.479 "nvmf_subsystem_listener_set_ana_state", 00:10:53.479 "nvmf_discovery_get_referrals", 00:10:53.479 "nvmf_discovery_remove_referral", 00:10:53.479 "nvmf_discovery_add_referral", 00:10:53.479 "nvmf_subsystem_remove_listener", 00:10:53.479 "nvmf_subsystem_add_listener", 00:10:53.479 "nvmf_delete_subsystem", 00:10:53.479 "nvmf_create_subsystem", 00:10:53.479 "nvmf_get_subsystems", 00:10:53.479 "nvmf_set_crdt", 00:10:53.479 "nvmf_set_config", 00:10:53.479 "nvmf_set_max_subsystems", 00:10:53.479 "iscsi_set_options", 00:10:53.479 "iscsi_get_auth_groups", 00:10:53.479 "iscsi_auth_group_remove_secret", 00:10:53.479 "iscsi_auth_group_add_secret", 00:10:53.479 "iscsi_delete_auth_group", 00:10:53.479 "iscsi_create_auth_group", 00:10:53.479 "iscsi_set_discovery_auth", 00:10:53.479 "iscsi_get_options", 00:10:53.479 "iscsi_target_node_request_logout", 00:10:53.479 "iscsi_target_node_set_redirect", 00:10:53.479 "iscsi_target_node_set_auth", 00:10:53.479 "iscsi_target_node_add_lun", 00:10:53.479 "iscsi_get_stats", 00:10:53.479 "iscsi_get_connections", 00:10:53.479 "iscsi_portal_group_set_auth", 00:10:53.479 "iscsi_start_portal_group", 00:10:53.479 "iscsi_delete_portal_group", 00:10:53.479 "iscsi_create_portal_group", 00:10:53.479 "iscsi_get_portal_groups", 00:10:53.479 "iscsi_delete_target_node", 00:10:53.479 "iscsi_target_node_remove_pg_ig_maps", 00:10:53.479 "iscsi_target_node_add_pg_ig_maps", 00:10:53.479 "iscsi_create_target_node", 00:10:53.479 "iscsi_get_target_nodes", 00:10:53.479 "iscsi_delete_initiator_group", 00:10:53.479 "iscsi_initiator_group_remove_initiators", 00:10:53.479 "iscsi_initiator_group_add_initiators", 00:10:53.479 "iscsi_create_initiator_group", 00:10:53.479 "iscsi_get_initiator_groups", 00:10:53.479 "keyring_linux_set_options", 00:10:53.479 "keyring_file_remove_key", 00:10:53.479 "keyring_file_add_key", 00:10:53.479 "iaa_scan_accel_module", 00:10:53.479 "dsa_scan_accel_module", 00:10:53.479 "ioat_scan_accel_module", 00:10:53.479 "accel_error_inject_error", 00:10:53.479 "bdev_iscsi_delete", 00:10:53.479 "bdev_iscsi_create", 00:10:53.479 "bdev_iscsi_set_options", 00:10:53.479 "bdev_virtio_attach_controller", 00:10:53.479 "bdev_virtio_scsi_get_devices", 00:10:53.479 "bdev_virtio_detach_controller", 00:10:53.479 "bdev_virtio_blk_set_hotplug", 00:10:53.479 "bdev_ftl_set_property", 00:10:53.479 "bdev_ftl_get_properties", 00:10:53.479 "bdev_ftl_get_stats", 00:10:53.479 "bdev_ftl_unmap", 00:10:53.479 "bdev_ftl_unload", 00:10:53.479 "bdev_ftl_delete", 00:10:53.479 "bdev_ftl_load", 
00:10:53.479 "bdev_ftl_create", 00:10:53.479 "bdev_aio_delete", 00:10:53.479 "bdev_aio_rescan", 00:10:53.479 "bdev_aio_create", 00:10:53.479 "blobfs_create", 00:10:53.479 "blobfs_detect", 00:10:53.479 "blobfs_set_cache_size", 00:10:53.479 "bdev_zone_block_delete", 00:10:53.479 "bdev_zone_block_create", 00:10:53.479 "bdev_delay_delete", 00:10:53.479 "bdev_delay_create", 00:10:53.479 "bdev_delay_update_latency", 00:10:53.479 "bdev_split_delete", 00:10:53.479 "bdev_split_create", 00:10:53.479 "bdev_error_inject_error", 00:10:53.479 "bdev_error_delete", 00:10:53.479 "bdev_error_create", 00:10:53.479 "bdev_raid_set_options", 00:10:53.479 "bdev_raid_remove_base_bdev", 00:10:53.479 "bdev_raid_add_base_bdev", 00:10:53.480 "bdev_raid_delete", 00:10:53.480 "bdev_raid_create", 00:10:53.480 "bdev_raid_get_bdevs", 00:10:53.480 "bdev_lvol_grow_lvstore", 00:10:53.480 "bdev_lvol_get_lvols", 00:10:53.480 "bdev_lvol_get_lvstores", 00:10:53.480 "bdev_lvol_delete", 00:10:53.480 "bdev_lvol_set_read_only", 00:10:53.480 "bdev_lvol_resize", 00:10:53.480 "bdev_lvol_decouple_parent", 00:10:53.480 "bdev_lvol_inflate", 00:10:53.480 "bdev_lvol_rename", 00:10:53.480 "bdev_lvol_clone_bdev", 00:10:53.480 "bdev_lvol_clone", 00:10:53.480 "bdev_lvol_snapshot", 00:10:53.480 "bdev_lvol_create", 00:10:53.480 "bdev_lvol_delete_lvstore", 00:10:53.480 "bdev_lvol_rename_lvstore", 00:10:53.480 "bdev_lvol_create_lvstore", 00:10:53.480 "bdev_passthru_delete", 00:10:53.480 "bdev_passthru_create", 00:10:53.480 "bdev_nvme_cuse_unregister", 00:10:53.480 "bdev_nvme_cuse_register", 00:10:53.480 "bdev_opal_new_user", 00:10:53.480 "bdev_opal_set_lock_state", 00:10:53.480 "bdev_opal_delete", 00:10:53.480 "bdev_opal_get_info", 00:10:53.480 "bdev_opal_create", 00:10:53.480 "bdev_nvme_opal_revert", 00:10:53.480 "bdev_nvme_opal_init", 00:10:53.480 "bdev_nvme_send_cmd", 00:10:53.480 "bdev_nvme_get_path_iostat", 00:10:53.480 "bdev_nvme_get_mdns_discovery_info", 00:10:53.480 "bdev_nvme_stop_mdns_discovery", 00:10:53.480 "bdev_nvme_start_mdns_discovery", 00:10:53.480 "bdev_nvme_set_multipath_policy", 00:10:53.480 "bdev_nvme_set_preferred_path", 00:10:53.480 "bdev_nvme_get_io_paths", 00:10:53.480 "bdev_nvme_remove_error_injection", 00:10:53.480 "bdev_nvme_add_error_injection", 00:10:53.480 "bdev_nvme_get_discovery_info", 00:10:53.480 "bdev_nvme_stop_discovery", 00:10:53.480 "bdev_nvme_start_discovery", 00:10:53.480 "bdev_nvme_get_controller_health_info", 00:10:53.480 "bdev_nvme_disable_controller", 00:10:53.480 "bdev_nvme_enable_controller", 00:10:53.480 "bdev_nvme_reset_controller", 00:10:53.480 "bdev_nvme_get_transport_statistics", 00:10:53.480 "bdev_nvme_apply_firmware", 00:10:53.480 "bdev_nvme_detach_controller", 00:10:53.480 "bdev_nvme_get_controllers", 00:10:53.480 "bdev_nvme_attach_controller", 00:10:53.480 "bdev_nvme_set_hotplug", 00:10:53.480 "bdev_nvme_set_options", 00:10:53.480 "bdev_null_resize", 00:10:53.480 "bdev_null_delete", 00:10:53.480 "bdev_null_create", 00:10:53.480 "bdev_malloc_delete", 00:10:53.480 "bdev_malloc_create" 00:10:53.480 ] 00:10:53.480 12:54:57 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:10:53.480 12:54:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:53.480 12:54:57 -- common/autotest_common.sh@10 -- # set +x 00:10:53.480 12:54:57 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:53.480 12:54:57 -- spdkcli/tcp.sh@38 -- # killprocess 110669 00:10:53.480 12:54:57 -- common/autotest_common.sh@924 -- # '[' -z 110669 ']' 00:10:53.480 12:54:57 -- common/autotest_common.sh@928 -- # kill -0 
110669 00:10:53.480 12:54:57 -- common/autotest_common.sh@929 -- # uname 00:10:53.480 12:54:57 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:10:53.480 12:54:57 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 110669 00:10:53.739 killing process with pid 110669 00:10:53.739 12:54:57 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:10:53.739 12:54:57 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:10:53.739 12:54:57 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 110669' 00:10:53.739 12:54:57 -- common/autotest_common.sh@943 -- # kill 110669 00:10:53.739 12:54:57 -- common/autotest_common.sh@948 -- # wait 110669 00:10:55.643 ************************************ 00:10:55.643 END TEST spdkcli_tcp 00:10:55.643 ************************************ 00:10:55.643 00:10:55.643 real 0m3.798s 00:10:55.643 user 0m6.858s 00:10:55.643 sys 0m0.525s 00:10:55.643 12:54:59 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:10:55.643 12:54:59 -- common/autotest_common.sh@10 -- # set +x 00:10:55.901 12:54:59 -- spdk/autotest.sh@175 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:55.901 12:54:59 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:10:55.901 12:54:59 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:55.902 12:54:59 -- common/autotest_common.sh@10 -- # set +x 00:10:55.902 ************************************ 00:10:55.902 START TEST dpdk_mem_utility 00:10:55.902 ************************************ 00:10:55.902 12:54:59 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:55.902 * Looking for test storage... 00:10:55.902 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:10:55.902 12:54:59 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:10:55.902 12:54:59 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=110793 00:10:55.902 12:54:59 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:55.902 12:54:59 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 110793 00:10:55.902 12:54:59 -- common/autotest_common.sh@817 -- # '[' -z 110793 ']' 00:10:55.902 12:54:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.902 12:54:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:55.902 12:54:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.902 12:54:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:55.902 12:54:59 -- common/autotest_common.sh@10 -- # set +x 00:10:55.902 [2024-04-17 12:55:00.009980] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:10:55.902 [2024-04-17 12:55:00.010422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110793 ] 00:10:56.160 [2024-04-17 12:55:00.177903] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.502 [2024-04-17 12:55:00.388032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.074 12:55:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:57.074 12:55:01 -- common/autotest_common.sh@850 -- # return 0 00:10:57.074 12:55:01 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:10:57.074 12:55:01 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:10:57.074 12:55:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:57.074 12:55:01 -- common/autotest_common.sh@10 -- # set +x 00:10:57.074 { 00:10:57.074 "filename": "/tmp/spdk_mem_dump.txt" 00:10:57.074 } 00:10:57.074 12:55:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:57.074 12:55:01 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:10:57.074 DPDK memory size 820.000000 MiB in 1 heap(s) 00:10:57.074 1 heaps totaling size 820.000000 MiB 00:10:57.074 size: 820.000000 MiB heap id: 0 00:10:57.074 end heaps---------- 00:10:57.074 8 mempools totaling size 598.116089 MiB 00:10:57.074 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:10:57.074 size: 158.602051 MiB name: PDU_data_out_Pool 00:10:57.074 size: 84.521057 MiB name: bdev_io_110793 00:10:57.074 size: 51.011292 MiB name: evtpool_110793 00:10:57.074 size: 50.003479 MiB name: msgpool_110793 00:10:57.074 size: 21.763794 MiB name: PDU_Pool 00:10:57.074 size: 19.513306 MiB name: SCSI_TASK_Pool 00:10:57.074 size: 0.026123 MiB name: Session_Pool 00:10:57.074 end mempools------- 00:10:57.074 6 memzones totaling size 4.142822 MiB 00:10:57.075 size: 1.000366 MiB name: RG_ring_0_110793 00:10:57.075 size: 1.000366 MiB name: RG_ring_1_110793 00:10:57.075 size: 1.000366 MiB name: RG_ring_4_110793 00:10:57.075 size: 1.000366 MiB name: RG_ring_5_110793 00:10:57.075 size: 0.125366 MiB name: RG_ring_2_110793 00:10:57.075 size: 0.015991 MiB name: RG_ring_3_110793 00:10:57.075 end memzones------- 00:10:57.075 12:55:01 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:10:57.335 heap id: 0 total size: 820.000000 MiB number of busy elements: 223 number of free elements: 18 00:10:57.335 list of free elements. 
size: 18.470459 MiB 00:10:57.335 element at address: 0x200000400000 with size: 1.999451 MiB 00:10:57.335 element at address: 0x200000800000 with size: 1.996887 MiB 00:10:57.335 element at address: 0x200007000000 with size: 1.995972 MiB 00:10:57.335 element at address: 0x20000b200000 with size: 1.995972 MiB 00:10:57.335 element at address: 0x200019100040 with size: 0.999939 MiB 00:10:57.335 element at address: 0x200019500040 with size: 0.999939 MiB 00:10:57.335 element at address: 0x200019600000 with size: 0.999329 MiB 00:10:57.335 element at address: 0x200003e00000 with size: 0.996094 MiB 00:10:57.335 element at address: 0x200032200000 with size: 0.994324 MiB 00:10:57.335 element at address: 0x200018e00000 with size: 0.959656 MiB 00:10:57.335 element at address: 0x200019900040 with size: 0.937256 MiB 00:10:57.335 element at address: 0x200000200000 with size: 0.835083 MiB 00:10:57.335 element at address: 0x20001b000000 with size: 0.561218 MiB 00:10:57.335 element at address: 0x200019200000 with size: 0.489197 MiB 00:10:57.335 element at address: 0x200019a00000 with size: 0.485413 MiB 00:10:57.335 element at address: 0x200013800000 with size: 0.469116 MiB 00:10:57.335 element at address: 0x200028400000 with size: 0.399475 MiB 00:10:57.335 element at address: 0x200003a00000 with size: 0.356140 MiB 00:10:57.335 list of standard malloc elements. size: 199.265137 MiB 00:10:57.335 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:10:57.335 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:10:57.335 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:10:57.335 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:10:57.335 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:10:57.335 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:10:57.335 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:10:57.335 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:10:57.335 element at address: 0x20000b1ff380 with size: 0.000366 MiB 00:10:57.335 element at address: 0x20000b1ff040 with size: 0.000305 MiB 00:10:57.335 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:10:57.335 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:10:57.335 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:10:57.335 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:10:57.335 element at address: 0x2000002d5f80 with size: 0.000244 MiB 00:10:57.335 element at address: 0x2000002d6080 with size: 0.000244 MiB 00:10:57.335 element at address: 0x2000002d6180 with size: 0.000244 MiB 00:10:57.335 element at address: 0x2000002d6280 with size: 0.000244 MiB 00:10:57.335 element at address: 0x2000002d6380 with size: 0.000244 MiB 00:10:57.335 element at address: 0x2000002d6480 with size: 0.000244 MiB 00:10:57.335 element at address: 0x2000002d6580 with size: 0.000244 MiB 00:10:57.335 element at address: 0x2000002d6680 with size: 0.000244 MiB 00:10:57.335 element at address: 0x2000002d6780 with size: 0.000244 MiB 00:10:57.335 element at address: 0x2000002d6880 with size: 0.000244 MiB 00:10:57.335 element at address: 0x2000002d6980 with size: 0.000244 MiB 00:10:57.335 element at address: 0x2000002d6a80 with size: 0.000244 MiB 00:10:57.335 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:10:57.335 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:10:57.335 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:10:57.335 element at address: 0x2000002d7000 with size: 0.000244 MiB 
00:10:57.335 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:10:57.335 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:10:57.335 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:10:57.335 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:10:57.335 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:10:57.335 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:10:57.335 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:10:57.335 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:10:57.335 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:10:57.335 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:10:57.335 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:10:57.335 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:10:57.335 element at address: 0x200003aff980 with size: 0.000244 MiB 00:10:57.335 element at address: 0x200003affa80 with size: 0.000244 MiB 00:10:57.335 element at address: 0x200003eff000 with size: 0.000244 MiB 00:10:57.335 element at address: 0x20000b1ff180 with size: 0.000244 MiB 00:10:57.335 element at address: 0x20000b1ff280 with size: 0.000244 MiB 00:10:57.335 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:10:57.335 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:10:57.335 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:10:57.335 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:10:57.335 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:10:57.335 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:10:57.335 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:10:57.335 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:10:57.335 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:10:57.335 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:10:57.335 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:10:57.335 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:10:57.335 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:10:57.335 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:10:57.335 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:10:57.335 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:10:57.335 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:10:57.335 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:10:57.335 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:10:57.335 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:10:57.335 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:10:57.335 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:10:57.335 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:10:57.335 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:10:57.335 element at address: 0x200013878180 with size: 0.000244 MiB 00:10:57.335 element at address: 0x200013878280 with size: 0.000244 MiB 00:10:57.335 element at address: 0x200013878380 with size: 0.000244 MiB 00:10:57.335 element at address: 0x200013878480 with size: 0.000244 MiB 00:10:57.335 element at address: 0x200013878580 with size: 0.000244 MiB 00:10:57.336 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:10:57.336 element at 
address: 0x20001927d4c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:10:57.336 element at address: 0x200019abc680 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b08fac0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b08fbc0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b08fcc0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b08fdc0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b08fec0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b08ffc0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0900c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0901c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0902c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0903c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0904c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0905c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0923c0 
with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:10:57.336 element at address: 0x200028466440 with size: 0.000244 MiB 
00:10:57.336 element at address: 0x200028466540 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20002846d200 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20002846d480 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20002846d580 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20002846d680 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20002846d780 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20002846d880 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20002846d980 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20002846da80 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20002846db80 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20002846de80 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20002846df80 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20002846e080 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20002846e180 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20002846e280 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20002846e380 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20002846e480 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20002846e580 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20002846e680 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20002846e780 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20002846e880 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20002846e980 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20002846f080 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20002846f180 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20002846f280 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20002846f380 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20002846f480 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20002846f580 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20002846f680 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20002846f780 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20002846f880 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20002846f980 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:10:57.336 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:10:57.336 list of memzone associated elements. 
size: 602.264404 MiB 00:10:57.336 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:10:57.336 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:10:57.336 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:10:57.337 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:10:57.337 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:10:57.337 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_110793_0 00:10:57.337 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:10:57.337 associated memzone info: size: 48.002930 MiB name: MP_evtpool_110793_0 00:10:57.337 element at address: 0x200003fff340 with size: 48.003113 MiB 00:10:57.337 associated memzone info: size: 48.002930 MiB name: MP_msgpool_110793_0 00:10:57.337 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:10:57.337 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:10:57.337 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:10:57.337 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:10:57.337 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:10:57.337 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_110793 00:10:57.337 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:10:57.337 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_110793 00:10:57.337 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:10:57.337 associated memzone info: size: 1.007996 MiB name: MP_evtpool_110793 00:10:57.337 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:10:57.337 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:10:57.337 element at address: 0x200019abc780 with size: 1.008179 MiB 00:10:57.337 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:10:57.337 element at address: 0x200018efde00 with size: 1.008179 MiB 00:10:57.337 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:10:57.337 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:10:57.337 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:10:57.337 element at address: 0x200003eff100 with size: 1.000549 MiB 00:10:57.337 associated memzone info: size: 1.000366 MiB name: RG_ring_0_110793 00:10:57.337 element at address: 0x200003affb80 with size: 1.000549 MiB 00:10:57.337 associated memzone info: size: 1.000366 MiB name: RG_ring_1_110793 00:10:57.337 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:10:57.337 associated memzone info: size: 1.000366 MiB name: RG_ring_4_110793 00:10:57.337 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:10:57.337 associated memzone info: size: 1.000366 MiB name: RG_ring_5_110793 00:10:57.337 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:10:57.337 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_110793 00:10:57.337 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:10:57.337 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:10:57.337 element at address: 0x200013878680 with size: 0.500549 MiB 00:10:57.337 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:10:57.337 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:10:57.337 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:10:57.337 element at address: 0x200003adf740 with size: 0.125549 MiB 00:10:57.337 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_110793 00:10:57.337 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:10:57.337 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:10:57.337 element at address: 0x200028466640 with size: 0.023804 MiB 00:10:57.337 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:10:57.337 element at address: 0x200003adb500 with size: 0.016174 MiB 00:10:57.337 associated memzone info: size: 0.015991 MiB name: RG_ring_3_110793 00:10:57.337 element at address: 0x20002846c7c0 with size: 0.002502 MiB 00:10:57.337 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:10:57.337 element at address: 0x2000002d6b80 with size: 0.000366 MiB 00:10:57.337 associated memzone info: size: 0.000183 MiB name: MP_msgpool_110793 00:10:57.337 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:10:57.337 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_110793 00:10:57.337 element at address: 0x20002846d300 with size: 0.000366 MiB 00:10:57.337 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:10:57.337 12:55:01 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:10:57.337 12:55:01 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 110793 00:10:57.337 12:55:01 -- common/autotest_common.sh@924 -- # '[' -z 110793 ']' 00:10:57.337 12:55:01 -- common/autotest_common.sh@928 -- # kill -0 110793 00:10:57.337 12:55:01 -- common/autotest_common.sh@929 -- # uname 00:10:57.337 12:55:01 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:10:57.337 12:55:01 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 110793 00:10:57.337 12:55:01 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:10:57.337 12:55:01 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:10:57.337 12:55:01 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 110793' 00:10:57.337 killing process with pid 110793 00:10:57.337 12:55:01 -- common/autotest_common.sh@943 -- # kill 110793 00:10:57.337 12:55:01 -- common/autotest_common.sh@948 -- # wait 110793 00:10:59.871 ************************************ 00:10:59.871 END TEST dpdk_mem_utility 00:10:59.871 ************************************ 00:10:59.871 00:10:59.871 real 0m3.614s 00:10:59.871 user 0m3.691s 00:10:59.871 sys 0m0.583s 00:10:59.871 12:55:03 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:10:59.871 12:55:03 -- common/autotest_common.sh@10 -- # set +x 00:10:59.871 12:55:03 -- spdk/autotest.sh@176 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:10:59.871 12:55:03 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:10:59.871 12:55:03 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:59.871 12:55:03 -- common/autotest_common.sh@10 -- # set +x 00:10:59.871 ************************************ 00:10:59.871 START TEST event 00:10:59.871 ************************************ 00:10:59.871 12:55:03 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:10:59.871 * Looking for test storage... 
00:10:59.871 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:10:59.871 12:55:03 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:59.871 12:55:03 -- bdev/nbd_common.sh@6 -- # set -e 00:10:59.871 12:55:03 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:59.871 12:55:03 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:10:59.871 12:55:03 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:59.871 12:55:03 -- common/autotest_common.sh@10 -- # set +x 00:10:59.871 ************************************ 00:10:59.871 START TEST event_perf 00:10:59.871 ************************************ 00:10:59.871 12:55:03 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:59.871 Running I/O for 1 seconds...[2024-04-17 12:55:03.709267] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:10:59.871 [2024-04-17 12:55:03.709581] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110920 ] 00:10:59.871 [2024-04-17 12:55:03.890963] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:00.130 [2024-04-17 12:55:04.102304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:00.130 [2024-04-17 12:55:04.102406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:00.130 [2024-04-17 12:55:04.102547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.130 [2024-04-17 12:55:04.102546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:01.507 Running I/O for 1 seconds... 00:11:01.507 lcore 0: 181722 00:11:01.507 lcore 1: 181720 00:11:01.507 lcore 2: 181719 00:11:01.507 lcore 3: 181720 00:11:01.507 done. 00:11:01.507 ************************************ 00:11:01.507 END TEST event_perf 00:11:01.507 ************************************ 00:11:01.507 00:11:01.507 real 0m1.830s 00:11:01.507 user 0m4.568s 00:11:01.507 sys 0m0.132s 00:11:01.507 12:55:05 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:11:01.507 12:55:05 -- common/autotest_common.sh@10 -- # set +x 00:11:01.507 12:55:05 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:11:01.507 12:55:05 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:11:01.507 12:55:05 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:11:01.507 12:55:05 -- common/autotest_common.sh@10 -- # set +x 00:11:01.507 ************************************ 00:11:01.507 START TEST event_reactor 00:11:01.507 ************************************ 00:11:01.507 12:55:05 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:11:01.507 [2024-04-17 12:55:05.630490] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:11:01.507 [2024-04-17 12:55:05.630939] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110990 ] 00:11:01.766 [2024-04-17 12:55:05.801524] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.026 [2024-04-17 12:55:06.052929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.405 test_start 00:11:03.405 oneshot 00:11:03.405 tick 100 00:11:03.405 tick 100 00:11:03.405 tick 250 00:11:03.405 tick 100 00:11:03.405 tick 100 00:11:03.405 tick 100 00:11:03.405 tick 250 00:11:03.405 tick 500 00:11:03.405 tick 100 00:11:03.405 tick 100 00:11:03.405 tick 250 00:11:03.405 tick 100 00:11:03.405 tick 100 00:11:03.405 test_end 00:11:03.405 00:11:03.405 real 0m1.860s 00:11:03.405 user 0m1.634s 00:11:03.405 sys 0m0.120s 00:11:03.405 12:55:07 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:11:03.405 12:55:07 -- common/autotest_common.sh@10 -- # set +x 00:11:03.405 ************************************ 00:11:03.405 END TEST event_reactor 00:11:03.405 ************************************ 00:11:03.405 12:55:07 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:11:03.405 12:55:07 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:11:03.405 12:55:07 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:11:03.405 12:55:07 -- common/autotest_common.sh@10 -- # set +x 00:11:03.405 ************************************ 00:11:03.405 START TEST event_reactor_perf 00:11:03.405 ************************************ 00:11:03.405 12:55:07 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:11:03.664 [2024-04-17 12:55:07.567405] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:11:03.664 [2024-04-17 12:55:07.568099] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111044 ] 00:11:03.664 [2024-04-17 12:55:07.740607] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.924 [2024-04-17 12:55:07.955026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.300 test_start 00:11:05.300 test_end 00:11:05.300 Performance: 315134 events per second 00:11:05.300 00:11:05.300 real 0m1.857s 00:11:05.300 user 0m1.646s 00:11:05.300 sys 0m0.108s 00:11:05.300 12:55:09 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:11:05.300 12:55:09 -- common/autotest_common.sh@10 -- # set +x 00:11:05.300 ************************************ 00:11:05.300 END TEST event_reactor_perf 00:11:05.300 ************************************ 00:11:05.300 12:55:09 -- event/event.sh@49 -- # uname -s 00:11:05.300 12:55:09 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:11:05.300 12:55:09 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:11:05.300 12:55:09 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:11:05.300 12:55:09 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:11:05.300 12:55:09 -- common/autotest_common.sh@10 -- # set +x 00:11:05.559 ************************************ 00:11:05.559 START TEST event_scheduler 00:11:05.559 ************************************ 00:11:05.559 12:55:09 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:11:05.559 * Looking for test storage... 00:11:05.559 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:11:05.559 12:55:09 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:11:05.559 12:55:09 -- scheduler/scheduler.sh@35 -- # scheduler_pid=111129 00:11:05.559 12:55:09 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:11:05.559 12:55:09 -- scheduler/scheduler.sh@37 -- # waitforlisten 111129 00:11:05.559 12:55:09 -- common/autotest_common.sh@817 -- # '[' -z 111129 ']' 00:11:05.559 12:55:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.559 12:55:09 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:11:05.559 12:55:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:05.559 12:55:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.559 12:55:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:05.559 12:55:09 -- common/autotest_common.sh@10 -- # set +x 00:11:05.559 [2024-04-17 12:55:09.622338] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:11:05.559 [2024-04-17 12:55:09.622900] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111129 ] 00:11:05.827 [2024-04-17 12:55:09.817707] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:06.098 [2024-04-17 12:55:10.068162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.098 [2024-04-17 12:55:10.068247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:06.098 [2024-04-17 12:55:10.068364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:06.098 [2024-04-17 12:55:10.068361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:06.666 12:55:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:06.666 12:55:10 -- common/autotest_common.sh@850 -- # return 0 00:11:06.666 12:55:10 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:11:06.666 12:55:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:06.666 12:55:10 -- common/autotest_common.sh@10 -- # set +x 00:11:06.666 POWER: Env isn't set yet! 00:11:06.666 POWER: Attempting to initialise ACPI cpufreq power management... 00:11:06.666 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:06.666 POWER: Cannot set governor of lcore 0 to userspace 00:11:06.666 POWER: Attempting to initialise PSTAT power management... 00:11:06.666 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:06.666 POWER: Cannot set governor of lcore 0 to performance 00:11:06.666 POWER: Attempting to initialise AMD PSTATE power management... 00:11:06.666 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:06.666 POWER: Cannot set governor of lcore 0 to userspace 00:11:06.666 POWER: Attempting to initialise CPPC power management... 00:11:06.666 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:06.666 POWER: Cannot set governor of lcore 0 to userspace 00:11:06.666 POWER: Attempting to initialise VM power management... 00:11:06.666 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:11:06.666 POWER: Unable to set Power Management Environment for lcore 0 00:11:06.666 [2024-04-17 12:55:10.568132] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:11:06.666 [2024-04-17 12:55:10.568299] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:11:06.666 [2024-04-17 12:55:10.568351] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:11:06.666 12:55:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:06.666 12:55:10 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:11:06.666 12:55:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:06.666 12:55:10 -- common/autotest_common.sh@10 -- # set +x 00:11:06.925 [2024-04-17 12:55:10.874659] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
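The scheduler app above is launched with --wait-for-rpc, switched to the dynamic scheduler over RPC (the POWER/dpdk_governor errors show cpufreq governors being unavailable inside this VM, so the dynamic scheduler comes up without a working governor), and only then allowed to finish init. A minimal sketch of driving that same RPC sequence by hand, assuming a locally built spdk_tgt and the repo's scripts/rpc.py on the default /var/tmp/spdk.sock (the test uses its own scheduler app, but the call order is the same):

    ./build/bin/spdk_tgt -m 0xF --wait-for-rpc &      # app starts paused, listening on /var/tmp/spdk.sock
    # wait for the socket to appear; the harness uses waitforlisten, and rpc.py can also
    # retry the connection with -r/-t, as in the spdkcli_tcp trace earlier
    ./scripts/rpc.py framework_set_scheduler dynamic  # must be issued before init completes
    ./scripts/rpc.py framework_start_init             # releases the app; the scheduler is live after this
    ./scripts/rpc.py framework_get_scheduler          # confirm 'dynamic' is the active scheduler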
00:11:06.925 12:55:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:06.925 12:55:10 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:11:06.925 12:55:10 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:11:06.925 12:55:10 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:11:06.925 12:55:10 -- common/autotest_common.sh@10 -- # set +x 00:11:06.925 ************************************ 00:11:06.925 START TEST scheduler_create_thread 00:11:06.925 ************************************ 00:11:06.925 12:55:10 -- common/autotest_common.sh@1099 -- # scheduler_create_thread 00:11:06.925 12:55:10 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:11:06.925 12:55:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:06.925 12:55:10 -- common/autotest_common.sh@10 -- # set +x 00:11:06.925 2 00:11:06.925 12:55:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:06.925 12:55:10 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:11:06.925 12:55:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:06.925 12:55:10 -- common/autotest_common.sh@10 -- # set +x 00:11:06.925 3 00:11:06.925 12:55:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:06.925 12:55:10 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:11:06.925 12:55:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:06.925 12:55:10 -- common/autotest_common.sh@10 -- # set +x 00:11:06.925 4 00:11:06.925 12:55:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:06.925 12:55:10 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:11:06.925 12:55:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:06.925 12:55:10 -- common/autotest_common.sh@10 -- # set +x 00:11:06.925 5 00:11:06.925 12:55:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:06.925 12:55:10 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:11:06.925 12:55:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:06.925 12:55:10 -- common/autotest_common.sh@10 -- # set +x 00:11:06.925 6 00:11:06.925 12:55:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:06.925 12:55:10 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:11:06.925 12:55:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:06.925 12:55:10 -- common/autotest_common.sh@10 -- # set +x 00:11:06.925 7 00:11:06.925 12:55:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:06.925 12:55:10 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:11:06.925 12:55:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:06.925 12:55:10 -- common/autotest_common.sh@10 -- # set +x 00:11:06.925 8 00:11:06.925 12:55:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:06.925 12:55:10 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:11:06.925 12:55:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:06.925 12:55:10 -- common/autotest_common.sh@10 -- # set +x 00:11:06.925 9 00:11:06.925 
12:55:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:06.925 12:55:10 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:11:06.925 12:55:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:06.925 12:55:10 -- common/autotest_common.sh@10 -- # set +x 00:11:06.925 10 00:11:06.925 12:55:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:06.925 12:55:10 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:11:06.925 12:55:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:06.925 12:55:10 -- common/autotest_common.sh@10 -- # set +x 00:11:06.925 12:55:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:06.925 12:55:11 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:11:06.925 12:55:11 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:11:06.925 12:55:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:06.925 12:55:11 -- common/autotest_common.sh@10 -- # set +x 00:11:06.925 12:55:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:06.925 12:55:11 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:11:06.925 12:55:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:06.925 12:55:11 -- common/autotest_common.sh@10 -- # set +x 00:11:07.862 12:55:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:07.862 12:55:11 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:11:07.862 12:55:11 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:11:07.862 12:55:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:07.862 12:55:11 -- common/autotest_common.sh@10 -- # set +x 00:11:09.236 12:55:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:09.236 ************************************ 00:11:09.236 END TEST scheduler_create_thread 00:11:09.236 ************************************ 00:11:09.236 00:11:09.236 real 0m2.155s 00:11:09.236 user 0m0.009s 00:11:09.236 sys 0m0.000s 00:11:09.236 12:55:13 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:11:09.236 12:55:13 -- common/autotest_common.sh@10 -- # set +x 00:11:09.236 12:55:13 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:11:09.236 12:55:13 -- scheduler/scheduler.sh@46 -- # killprocess 111129 00:11:09.236 12:55:13 -- common/autotest_common.sh@924 -- # '[' -z 111129 ']' 00:11:09.236 12:55:13 -- common/autotest_common.sh@928 -- # kill -0 111129 00:11:09.236 12:55:13 -- common/autotest_common.sh@929 -- # uname 00:11:09.236 12:55:13 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:11:09.236 12:55:13 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 111129 00:11:09.236 12:55:13 -- common/autotest_common.sh@930 -- # process_name=reactor_2 00:11:09.236 killing process with pid 111129 00:11:09.236 12:55:13 -- common/autotest_common.sh@934 -- # '[' reactor_2 = sudo ']' 00:11:09.236 12:55:13 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 111129' 00:11:09.236 12:55:13 -- common/autotest_common.sh@943 -- # kill 111129 00:11:09.236 12:55:13 -- common/autotest_common.sh@948 -- # wait 111129 00:11:09.495 [2024-04-17 12:55:13.547688] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
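The scheduler_thread_create / scheduler_thread_set_active / scheduler_thread_delete calls traced above are not core RPCs; they come from the test's RPC plugin, loaded with rpc.py's --plugin flag. A condensed sketch of that invocation pattern, assuming the scheduler app is already listening on the default socket and PYTHONPATH resolves scheduler_plugin (it lives beside the test in test/event/scheduler):

    rpc="scripts/rpc.py --plugin scheduler_plugin"
    $rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100   # thread pinned to core 0, 100% busy
    $rpc scheduler_thread_create -n idle_pinned -m 0x1 -a 0       # pinned thread that stays idle
    $rpc scheduler_thread_set_active 11 50                        # ids come from the create calls' output (11 and 12 in the run above)
    $rpc scheduler_thread_delete 12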
00:11:10.873 ************************************ 00:11:10.873 END TEST event_scheduler 00:11:10.873 ************************************ 00:11:10.873 00:11:10.873 real 0m5.253s 00:11:10.873 user 0m8.528s 00:11:10.873 sys 0m0.457s 00:11:10.873 12:55:14 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:11:10.873 12:55:14 -- common/autotest_common.sh@10 -- # set +x 00:11:10.873 12:55:14 -- event/event.sh@51 -- # modprobe -n nbd 00:11:10.873 12:55:14 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:11:10.873 12:55:14 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:11:10.873 12:55:14 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:11:10.873 12:55:14 -- common/autotest_common.sh@10 -- # set +x 00:11:10.873 ************************************ 00:11:10.873 START TEST app_repeat 00:11:10.873 ************************************ 00:11:10.873 12:55:14 -- common/autotest_common.sh@1099 -- # app_repeat_test 00:11:10.873 12:55:14 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:10.873 12:55:14 -- event/event.sh@13 -- # nbd_list=("/dev/nbd0" "/dev/nbd1") 00:11:10.873 12:55:14 -- event/event.sh@13 -- # local nbd_list 00:11:10.873 12:55:14 -- event/event.sh@14 -- # bdev_list=("Malloc0" "Malloc1") 00:11:10.873 12:55:14 -- event/event.sh@14 -- # local bdev_list 00:11:10.873 12:55:14 -- event/event.sh@15 -- # local repeat_times=4 00:11:10.873 12:55:14 -- event/event.sh@17 -- # modprobe nbd 00:11:10.873 Process app_repeat pid: 111276 00:11:10.873 spdk_app_start Round 0 00:11:10.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:10.873 12:55:14 -- event/event.sh@19 -- # repeat_pid=111276 00:11:10.873 12:55:14 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:11:10.873 12:55:14 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:11:10.873 12:55:14 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 111276' 00:11:10.873 12:55:14 -- event/event.sh@23 -- # for i in {0..2} 00:11:10.873 12:55:14 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:11:10.873 12:55:14 -- event/event.sh@25 -- # waitforlisten 111276 /var/tmp/spdk-nbd.sock 00:11:10.873 12:55:14 -- common/autotest_common.sh@817 -- # '[' -z 111276 ']' 00:11:10.873 12:55:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:10.873 12:55:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:10.873 12:55:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:10.873 12:55:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:10.873 12:55:14 -- common/autotest_common.sh@10 -- # set +x 00:11:10.873 [2024-04-17 12:55:14.842961] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:11:10.873 [2024-04-17 12:55:14.843365] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111276 ] 00:11:10.873 [2024-04-17 12:55:15.013793] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:11.131 [2024-04-17 12:55:15.243113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.131 [2024-04-17 12:55:15.243124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:11.699 12:55:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:11.699 12:55:15 -- common/autotest_common.sh@850 -- # return 0 00:11:11.699 12:55:15 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:12.268 Malloc0 00:11:12.268 12:55:16 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:12.268 Malloc1 00:11:12.268 12:55:16 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:12.268 12:55:16 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:12.268 12:55:16 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:11:12.268 12:55:16 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:12.268 12:55:16 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:11:12.268 12:55:16 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:12.268 12:55:16 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:12.268 12:55:16 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:12.268 12:55:16 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:11:12.268 12:55:16 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:12.268 12:55:16 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:11:12.268 12:55:16 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:12.268 12:55:16 -- bdev/nbd_common.sh@12 -- # local i 00:11:12.268 12:55:16 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:12.268 12:55:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:12.268 12:55:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:12.527 /dev/nbd0 00:11:12.786 12:55:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:12.786 12:55:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:12.786 12:55:16 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:11:12.786 12:55:16 -- common/autotest_common.sh@855 -- # local i 00:11:12.786 12:55:16 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:11:12.786 12:55:16 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:11:12.786 12:55:16 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:11:12.786 12:55:16 -- common/autotest_common.sh@859 -- # break 00:11:12.786 12:55:16 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:12.786 12:55:16 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:12.786 12:55:16 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:12.786 1+0 records in 00:11:12.786 1+0 records out 00:11:12.786 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000487296 s, 8.4 MB/s 00:11:12.786 12:55:16 -- common/autotest_common.sh@872 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:12.786 12:55:16 -- common/autotest_common.sh@872 -- # size=4096 00:11:12.786 12:55:16 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:12.786 12:55:16 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:11:12.786 12:55:16 -- common/autotest_common.sh@875 -- # return 0 00:11:12.786 12:55:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:12.786 12:55:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:12.786 12:55:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:13.045 /dev/nbd1 00:11:13.045 12:55:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:13.045 12:55:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:13.045 12:55:16 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:11:13.045 12:55:16 -- common/autotest_common.sh@855 -- # local i 00:11:13.045 12:55:16 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:11:13.045 12:55:16 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:11:13.045 12:55:16 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:11:13.045 12:55:16 -- common/autotest_common.sh@859 -- # break 00:11:13.045 12:55:16 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:13.045 12:55:16 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:13.045 12:55:16 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:13.045 1+0 records in 00:11:13.045 1+0 records out 00:11:13.045 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000859625 s, 4.8 MB/s 00:11:13.045 12:55:16 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:13.045 12:55:16 -- common/autotest_common.sh@872 -- # size=4096 00:11:13.045 12:55:16 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:13.045 12:55:16 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:11:13.045 12:55:16 -- common/autotest_common.sh@875 -- # return 0 00:11:13.045 12:55:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:13.045 12:55:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:13.045 12:55:16 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:13.045 12:55:16 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:13.045 12:55:16 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:13.045 12:55:17 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:13.045 { 00:11:13.045 "nbd_device": "/dev/nbd0", 00:11:13.045 "bdev_name": "Malloc0" 00:11:13.045 }, 00:11:13.045 { 00:11:13.045 "nbd_device": "/dev/nbd1", 00:11:13.045 "bdev_name": "Malloc1" 00:11:13.045 } 00:11:13.045 ]' 00:11:13.045 12:55:17 -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:13.045 { 00:11:13.045 "nbd_device": "/dev/nbd0", 00:11:13.045 "bdev_name": "Malloc0" 00:11:13.045 }, 00:11:13.045 { 00:11:13.045 "nbd_device": "/dev/nbd1", 00:11:13.045 "bdev_name": "Malloc1" 00:11:13.045 } 00:11:13.045 ]' 00:11:13.045 12:55:17 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:13.303 12:55:17 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:13.303 /dev/nbd1' 00:11:13.303 12:55:17 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:13.303 /dev/nbd1' 00:11:13.303 12:55:17 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:13.303 
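The waitfornbd fragments traced above (the /proc/partitions polling and the single direct read) reassemble into roughly this helper. Retry count, block size and the nbdtest scratch file match the log; $testdir names the test/event directory seen in the paths, and the sleep between polls is assumed from the matching waitfornbd_exit loop:

waitfornbd() {
  local nbd_name=$1 i
  for ((i = 1; i <= 20; i++)); do
    grep -q -w "$nbd_name" /proc/partitions && break   # device visible to the kernel yet?
    sleep 0.1
  done
  # one direct 4 KiB read proves the export actually serves I/O
  dd if=/dev/$nbd_name of="$testdir"/nbdtest bs=4096 count=1 iflag=direct
  size=$(stat -c %s "$testdir"/nbdtest)
  rm -f "$testdir"/nbdtest
  [ "$size" != 0 ]   # an empty read would mean a dead export
}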
12:55:17 -- bdev/nbd_common.sh@65 -- # count=2 00:11:13.303 12:55:17 -- bdev/nbd_common.sh@66 -- # echo 2 00:11:13.303 12:55:17 -- bdev/nbd_common.sh@95 -- # count=2 00:11:13.303 12:55:17 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:13.303 12:55:17 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:13.303 12:55:17 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:11:13.303 12:55:17 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:13.303 12:55:17 -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:13.303 12:55:17 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:13.303 12:55:17 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:13.303 12:55:17 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:13.303 256+0 records in 00:11:13.303 256+0 records out 00:11:13.303 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00781579 s, 134 MB/s 00:11:13.303 12:55:17 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:13.303 12:55:17 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:13.303 256+0 records in 00:11:13.303 256+0 records out 00:11:13.303 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0217992 s, 48.1 MB/s 00:11:13.303 12:55:17 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:13.303 12:55:17 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:13.303 256+0 records in 00:11:13.303 256+0 records out 00:11:13.303 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0274701 s, 38.2 MB/s 00:11:13.303 12:55:17 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:13.303 12:55:17 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:11:13.303 12:55:17 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:13.303 12:55:17 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:13.303 12:55:17 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:13.303 12:55:17 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:13.303 12:55:17 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:13.303 12:55:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:13.303 12:55:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:13.303 12:55:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:13.303 12:55:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:13.303 12:55:17 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:13.303 12:55:17 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:13.303 12:55:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:13.303 12:55:17 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:11:13.303 12:55:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:13.303 12:55:17 -- bdev/nbd_common.sh@51 -- # local i 00:11:13.303 12:55:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:13.304 12:55:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:13.562 12:55:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:13.562 
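The write/verify pass just logged reduces to a plain dd-then-cmp loop over both exports (a sketch mirroring the traced commands; nbdrandtest is the scratch file from the log):

nbd_list=(/dev/nbd0 /dev/nbd1)
tmp_file="$testdir"/nbdrandtest
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256        # 1 MiB of random data, generated once
for nbd in "${nbd_list[@]}"; do
  dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
done
for nbd in "${nbd_list[@]}"; do
  cmp -b -n 1M "$tmp_file" "$nbd"                          # every device must read back identical bytes
done
rm "$tmp_file"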
12:55:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:13.562 12:55:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:13.562 12:55:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:13.562 12:55:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:13.562 12:55:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:13.562 12:55:17 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:11:13.562 12:55:17 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:11:13.562 12:55:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:13.562 12:55:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:13.562 12:55:17 -- bdev/nbd_common.sh@41 -- # break 00:11:13.562 12:55:17 -- bdev/nbd_common.sh@45 -- # return 0 00:11:13.562 12:55:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:13.562 12:55:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:13.821 12:55:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:13.821 12:55:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:13.821 12:55:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:13.821 12:55:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:13.821 12:55:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:13.821 12:55:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:13.821 12:55:17 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:11:14.079 12:55:18 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:11:14.079 12:55:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:14.079 12:55:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:14.079 12:55:18 -- bdev/nbd_common.sh@41 -- # break 00:11:14.079 12:55:18 -- bdev/nbd_common.sh@45 -- # return 0 00:11:14.079 12:55:18 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:14.079 12:55:18 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:14.079 12:55:18 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:14.337 12:55:18 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:14.337 12:55:18 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:14.337 12:55:18 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:14.337 12:55:18 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:14.337 12:55:18 -- bdev/nbd_common.sh@65 -- # echo '' 00:11:14.337 12:55:18 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:14.337 12:55:18 -- bdev/nbd_common.sh@65 -- # true 00:11:14.337 12:55:18 -- bdev/nbd_common.sh@65 -- # count=0 00:11:14.337 12:55:18 -- bdev/nbd_common.sh@66 -- # echo 0 00:11:14.337 12:55:18 -- bdev/nbd_common.sh@104 -- # count=0 00:11:14.337 12:55:18 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:14.337 12:55:18 -- bdev/nbd_common.sh@109 -- # return 0 00:11:14.337 12:55:18 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:14.945 12:55:18 -- event/event.sh@35 -- # sleep 3 00:11:15.882 [2024-04-17 12:55:20.002567] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:16.141 [2024-04-17 12:55:20.213568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:16.141 [2024-04-17 12:55:20.213575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.400 [2024-04-17 12:55:20.401472] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
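waitfornbd_exit, traced twice in the teardown above, is the mirror image of waitfornbd (sketch from the xtrace; in this log the device always disappears on the second poll, so the timeout path is never exercised):

waitfornbd_exit() {
  local nbd_name=$1 i
  for ((i = 1; i <= 20; i++)); do
    grep -q -w "$nbd_name" /proc/partitions || break   # gone from the kernel, we are done
    sleep 0.1
  done
  return 0
}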
00:11:16.400 [2024-04-17 12:55:20.401832] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:17.777 12:55:21 -- event/event.sh@23 -- # for i in {0..2} 00:11:17.777 spdk_app_start Round 1 00:11:17.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:17.777 12:55:21 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:11:17.777 12:55:21 -- event/event.sh@25 -- # waitforlisten 111276 /var/tmp/spdk-nbd.sock 00:11:17.777 12:55:21 -- common/autotest_common.sh@817 -- # '[' -z 111276 ']' 00:11:17.777 12:55:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:17.777 12:55:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:17.777 12:55:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:17.777 12:55:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:17.777 12:55:21 -- common/autotest_common.sh@10 -- # set +x 00:11:18.036 12:55:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:18.036 12:55:22 -- common/autotest_common.sh@850 -- # return 0 00:11:18.036 12:55:22 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:18.295 Malloc0 00:11:18.295 12:55:22 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:18.554 Malloc1 00:11:18.554 12:55:22 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:18.554 12:55:22 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:18.554 12:55:22 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:11:18.554 12:55:22 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:18.554 12:55:22 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:11:18.554 12:55:22 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:18.554 12:55:22 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:18.554 12:55:22 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:18.554 12:55:22 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:11:18.554 12:55:22 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:18.554 12:55:22 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:11:18.554 12:55:22 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:18.554 12:55:22 -- bdev/nbd_common.sh@12 -- # local i 00:11:18.554 12:55:22 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:18.554 12:55:22 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:18.554 12:55:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:18.812 /dev/nbd0 00:11:18.812 12:55:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:19.069 12:55:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:19.069 12:55:22 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:11:19.069 12:55:22 -- common/autotest_common.sh@855 -- # local i 00:11:19.069 12:55:22 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:11:19.069 12:55:22 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:11:19.069 12:55:22 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:11:19.069 12:55:22 -- common/autotest_common.sh@859 -- # break 00:11:19.069 12:55:22 -- common/autotest_common.sh@870 -- # (( 
i = 1 )) 00:11:19.069 12:55:22 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:19.069 12:55:22 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:19.069 1+0 records in 00:11:19.069 1+0 records out 00:11:19.069 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406489 s, 10.1 MB/s 00:11:19.069 12:55:22 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:19.069 12:55:22 -- common/autotest_common.sh@872 -- # size=4096 00:11:19.069 12:55:22 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:19.069 12:55:22 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:11:19.069 12:55:22 -- common/autotest_common.sh@875 -- # return 0 00:11:19.069 12:55:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:19.069 12:55:22 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:19.070 12:55:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:19.070 /dev/nbd1 00:11:19.328 12:55:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:19.328 12:55:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:19.328 12:55:23 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:11:19.328 12:55:23 -- common/autotest_common.sh@855 -- # local i 00:11:19.328 12:55:23 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:11:19.328 12:55:23 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:11:19.328 12:55:23 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:11:19.328 12:55:23 -- common/autotest_common.sh@859 -- # break 00:11:19.328 12:55:23 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:19.328 12:55:23 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:19.328 12:55:23 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:19.328 1+0 records in 00:11:19.328 1+0 records out 00:11:19.328 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366667 s, 11.2 MB/s 00:11:19.328 12:55:23 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:19.328 12:55:23 -- common/autotest_common.sh@872 -- # size=4096 00:11:19.328 12:55:23 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:19.328 12:55:23 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:11:19.328 12:55:23 -- common/autotest_common.sh@875 -- # return 0 00:11:19.328 12:55:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:19.328 12:55:23 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:19.328 12:55:23 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:19.328 12:55:23 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:19.328 12:55:23 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:19.586 12:55:23 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:19.586 { 00:11:19.586 "nbd_device": "/dev/nbd0", 00:11:19.586 "bdev_name": "Malloc0" 00:11:19.586 }, 00:11:19.586 { 00:11:19.586 "nbd_device": "/dev/nbd1", 00:11:19.586 "bdev_name": "Malloc1" 00:11:19.586 } 00:11:19.586 ]' 00:11:19.586 12:55:23 -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:19.586 { 00:11:19.586 "nbd_device": "/dev/nbd0", 00:11:19.586 "bdev_name": "Malloc0" 00:11:19.586 }, 00:11:19.586 { 00:11:19.586 
"nbd_device": "/dev/nbd1", 00:11:19.586 "bdev_name": "Malloc1" 00:11:19.586 } 00:11:19.586 ]' 00:11:19.586 12:55:23 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:19.586 12:55:23 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:19.586 /dev/nbd1' 00:11:19.586 12:55:23 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:19.586 /dev/nbd1' 00:11:19.586 12:55:23 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:19.586 12:55:23 -- bdev/nbd_common.sh@65 -- # count=2 00:11:19.586 12:55:23 -- bdev/nbd_common.sh@66 -- # echo 2 00:11:19.586 12:55:23 -- bdev/nbd_common.sh@95 -- # count=2 00:11:19.586 12:55:23 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:19.586 12:55:23 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:19.586 12:55:23 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:11:19.586 12:55:23 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:19.586 12:55:23 -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:19.586 12:55:23 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:19.586 12:55:23 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:19.586 12:55:23 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:19.586 256+0 records in 00:11:19.586 256+0 records out 00:11:19.586 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00555046 s, 189 MB/s 00:11:19.586 12:55:23 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:19.586 12:55:23 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:19.586 256+0 records in 00:11:19.586 256+0 records out 00:11:19.586 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0254942 s, 41.1 MB/s 00:11:19.586 12:55:23 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:19.586 12:55:23 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:19.586 256+0 records in 00:11:19.586 256+0 records out 00:11:19.586 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0310045 s, 33.8 MB/s 00:11:19.586 12:55:23 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:19.586 12:55:23 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:11:19.586 12:55:23 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:19.586 12:55:23 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:19.586 12:55:23 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:19.586 12:55:23 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:19.586 12:55:23 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:19.586 12:55:23 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:19.586 12:55:23 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:19.586 12:55:23 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:19.586 12:55:23 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:19.586 12:55:23 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:19.586 12:55:23 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:19.586 12:55:23 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:19.586 12:55:23 -- bdev/nbd_common.sh@50 -- # 
nbd_list=($2) 00:11:19.586 12:55:23 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:19.586 12:55:23 -- bdev/nbd_common.sh@51 -- # local i 00:11:19.586 12:55:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:19.586 12:55:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:19.845 12:55:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:19.845 12:55:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:19.845 12:55:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:19.845 12:55:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:19.845 12:55:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:19.845 12:55:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:19.845 12:55:23 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:11:20.103 12:55:23 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:11:20.103 12:55:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:20.103 12:55:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:20.103 12:55:23 -- bdev/nbd_common.sh@41 -- # break 00:11:20.103 12:55:24 -- bdev/nbd_common.sh@45 -- # return 0 00:11:20.103 12:55:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:20.103 12:55:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:20.103 12:55:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:20.103 12:55:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:20.103 12:55:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:20.103 12:55:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:20.103 12:55:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:20.103 12:55:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:20.103 12:55:24 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:11:20.361 12:55:24 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:11:20.361 12:55:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:20.361 12:55:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:20.361 12:55:24 -- bdev/nbd_common.sh@41 -- # break 00:11:20.361 12:55:24 -- bdev/nbd_common.sh@45 -- # return 0 00:11:20.361 12:55:24 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:20.361 12:55:24 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:20.361 12:55:24 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:20.620 12:55:24 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:20.620 12:55:24 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:20.620 12:55:24 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:20.620 12:55:24 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:20.620 12:55:24 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:20.620 12:55:24 -- bdev/nbd_common.sh@65 -- # echo '' 00:11:20.620 12:55:24 -- bdev/nbd_common.sh@65 -- # true 00:11:20.620 12:55:24 -- bdev/nbd_common.sh@65 -- # count=0 00:11:20.620 12:55:24 -- bdev/nbd_common.sh@66 -- # echo 0 00:11:20.620 12:55:24 -- bdev/nbd_common.sh@104 -- # count=0 00:11:20.620 12:55:24 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:20.620 12:55:24 -- bdev/nbd_common.sh@109 -- # return 0 00:11:20.620 12:55:24 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:21.186 12:55:25 -- event/event.sh@35 -- # sleep 3 00:11:22.120 
[2024-04-17 12:55:26.254341] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:22.378 [2024-04-17 12:55:26.455522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:22.378 [2024-04-17 12:55:26.455527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.637 [2024-04-17 12:55:26.643899] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:22.637 [2024-04-17 12:55:26.644001] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:24.011 spdk_app_start Round 2 00:11:24.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:24.011 12:55:28 -- event/event.sh@23 -- # for i in {0..2} 00:11:24.011 12:55:28 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:11:24.011 12:55:28 -- event/event.sh@25 -- # waitforlisten 111276 /var/tmp/spdk-nbd.sock 00:11:24.011 12:55:28 -- common/autotest_common.sh@817 -- # '[' -z 111276 ']' 00:11:24.011 12:55:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:24.011 12:55:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:24.011 12:55:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:24.011 12:55:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:24.011 12:55:28 -- common/autotest_common.sh@10 -- # set +x 00:11:24.268 12:55:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:24.268 12:55:28 -- common/autotest_common.sh@850 -- # return 0 00:11:24.268 12:55:28 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:24.527 Malloc0 00:11:24.527 12:55:28 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:24.785 Malloc1 00:11:24.785 12:55:28 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:24.785 12:55:28 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:24.785 12:55:28 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:11:24.785 12:55:28 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:24.785 12:55:28 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:11:24.785 12:55:28 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:24.785 12:55:28 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:24.785 12:55:28 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:24.785 12:55:28 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:11:24.785 12:55:28 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:24.785 12:55:28 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:11:24.785 12:55:28 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:24.785 12:55:28 -- bdev/nbd_common.sh@12 -- # local i 00:11:24.785 12:55:28 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:24.785 12:55:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:24.785 12:55:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:25.043 /dev/nbd0 00:11:25.043 12:55:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:25.043 12:55:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:25.043 12:55:29 -- common/autotest_common.sh@854 -- # local 
nbd_name=nbd0 00:11:25.043 12:55:29 -- common/autotest_common.sh@855 -- # local i 00:11:25.043 12:55:29 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:11:25.043 12:55:29 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:11:25.043 12:55:29 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:11:25.043 12:55:29 -- common/autotest_common.sh@859 -- # break 00:11:25.043 12:55:29 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:25.043 12:55:29 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:25.043 12:55:29 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:25.043 1+0 records in 00:11:25.043 1+0 records out 00:11:25.043 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000555368 s, 7.4 MB/s 00:11:25.043 12:55:29 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:25.043 12:55:29 -- common/autotest_common.sh@872 -- # size=4096 00:11:25.043 12:55:29 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:25.043 12:55:29 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:11:25.043 12:55:29 -- common/autotest_common.sh@875 -- # return 0 00:11:25.043 12:55:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:25.043 12:55:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:25.043 12:55:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:25.611 /dev/nbd1 00:11:25.611 12:55:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:25.611 12:55:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:25.611 12:55:29 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:11:25.611 12:55:29 -- common/autotest_common.sh@855 -- # local i 00:11:25.611 12:55:29 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:11:25.611 12:55:29 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:11:25.611 12:55:29 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:11:25.611 12:55:29 -- common/autotest_common.sh@859 -- # break 00:11:25.611 12:55:29 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:25.611 12:55:29 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:25.611 12:55:29 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:25.611 1+0 records in 00:11:25.611 1+0 records out 00:11:25.611 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408242 s, 10.0 MB/s 00:11:25.611 12:55:29 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:25.611 12:55:29 -- common/autotest_common.sh@872 -- # size=4096 00:11:25.611 12:55:29 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:25.611 12:55:29 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:11:25.611 12:55:29 -- common/autotest_common.sh@875 -- # return 0 00:11:25.611 12:55:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:25.611 12:55:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:25.611 12:55:29 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:25.611 12:55:29 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:25.611 12:55:29 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:25.870 12:55:29 -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:11:25.870 { 00:11:25.870 "nbd_device": "/dev/nbd0", 00:11:25.870 "bdev_name": "Malloc0" 00:11:25.870 }, 00:11:25.870 { 00:11:25.870 "nbd_device": "/dev/nbd1", 00:11:25.870 "bdev_name": "Malloc1" 00:11:25.870 } 00:11:25.870 ]' 00:11:25.870 12:55:29 -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:25.870 { 00:11:25.870 "nbd_device": "/dev/nbd0", 00:11:25.870 "bdev_name": "Malloc0" 00:11:25.870 }, 00:11:25.870 { 00:11:25.870 "nbd_device": "/dev/nbd1", 00:11:25.870 "bdev_name": "Malloc1" 00:11:25.870 } 00:11:25.870 ]' 00:11:25.870 12:55:29 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:25.870 12:55:29 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:25.870 /dev/nbd1' 00:11:25.870 12:55:29 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:25.870 /dev/nbd1' 00:11:25.870 12:55:29 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:25.870 12:55:29 -- bdev/nbd_common.sh@65 -- # count=2 00:11:25.870 12:55:29 -- bdev/nbd_common.sh@66 -- # echo 2 00:11:25.870 12:55:29 -- bdev/nbd_common.sh@95 -- # count=2 00:11:25.870 12:55:29 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:25.870 12:55:29 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:25.870 12:55:29 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:11:25.870 12:55:29 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:25.870 12:55:29 -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:25.870 12:55:29 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:25.870 12:55:29 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:25.870 12:55:29 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:25.870 256+0 records in 00:11:25.870 256+0 records out 00:11:25.870 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00859757 s, 122 MB/s 00:11:25.870 12:55:29 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:25.870 12:55:29 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:25.870 256+0 records in 00:11:25.870 256+0 records out 00:11:25.870 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0276275 s, 38.0 MB/s 00:11:25.870 12:55:29 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:25.870 12:55:29 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:25.870 256+0 records in 00:11:25.870 256+0 records out 00:11:25.870 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0291399 s, 36.0 MB/s 00:11:25.870 12:55:29 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:25.870 12:55:29 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:11:25.870 12:55:29 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:25.870 12:55:29 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:25.870 12:55:29 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:25.870 12:55:29 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:25.870 12:55:29 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:25.870 12:55:29 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:25.870 12:55:29 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:25.870 12:55:29 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:25.870 12:55:29 -- bdev/nbd_common.sh@83 -- 
# cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:25.870 12:55:29 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:25.870 12:55:29 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:25.870 12:55:29 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:25.870 12:55:29 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:11:25.870 12:55:29 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:25.870 12:55:29 -- bdev/nbd_common.sh@51 -- # local i 00:11:25.870 12:55:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:25.870 12:55:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:26.129 12:55:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:26.129 12:55:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:26.129 12:55:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:26.129 12:55:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:26.129 12:55:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:26.129 12:55:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:26.129 12:55:30 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:11:26.387 12:55:30 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:11:26.387 12:55:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:26.387 12:55:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:26.387 12:55:30 -- bdev/nbd_common.sh@41 -- # break 00:11:26.387 12:55:30 -- bdev/nbd_common.sh@45 -- # return 0 00:11:26.387 12:55:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:26.387 12:55:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:26.387 12:55:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:26.387 12:55:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:26.387 12:55:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:26.387 12:55:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:26.387 12:55:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:26.387 12:55:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:26.645 12:55:30 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:11:26.645 12:55:30 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:11:26.645 12:55:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:26.645 12:55:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:26.645 12:55:30 -- bdev/nbd_common.sh@41 -- # break 00:11:26.645 12:55:30 -- bdev/nbd_common.sh@45 -- # return 0 00:11:26.645 12:55:30 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:26.646 12:55:30 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:26.646 12:55:30 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:26.906 12:55:30 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:26.906 12:55:30 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:26.906 12:55:30 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:26.906 12:55:30 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:26.906 12:55:30 -- bdev/nbd_common.sh@65 -- # echo '' 00:11:26.906 12:55:30 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:26.906 12:55:30 -- bdev/nbd_common.sh@65 -- # true 00:11:26.906 12:55:30 -- bdev/nbd_common.sh@65 -- # count=0 00:11:26.906 12:55:30 -- 
bdev/nbd_common.sh@66 -- # echo 0 00:11:26.906 12:55:30 -- bdev/nbd_common.sh@104 -- # count=0 00:11:26.906 12:55:30 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:26.906 12:55:30 -- bdev/nbd_common.sh@109 -- # return 0 00:11:26.906 12:55:30 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:27.472 12:55:31 -- event/event.sh@35 -- # sleep 3 00:11:28.848 [2024-04-17 12:55:32.588120] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:28.848 [2024-04-17 12:55:32.798498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:28.848 [2024-04-17 12:55:32.798503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.848 [2024-04-17 12:55:32.986409] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:28.848 [2024-04-17 12:55:32.986567] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:30.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:30.752 12:55:34 -- event/event.sh@38 -- # waitforlisten 111276 /var/tmp/spdk-nbd.sock 00:11:30.752 12:55:34 -- common/autotest_common.sh@817 -- # '[' -z 111276 ']' 00:11:30.752 12:55:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:30.752 12:55:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:30.752 12:55:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:30.752 12:55:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:30.752 12:55:34 -- common/autotest_common.sh@10 -- # set +x 00:11:30.752 12:55:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:30.752 12:55:34 -- common/autotest_common.sh@850 -- # return 0 00:11:30.752 12:55:34 -- event/event.sh@39 -- # killprocess 111276 00:11:30.752 12:55:34 -- common/autotest_common.sh@924 -- # '[' -z 111276 ']' 00:11:30.752 12:55:34 -- common/autotest_common.sh@928 -- # kill -0 111276 00:11:30.752 12:55:34 -- common/autotest_common.sh@929 -- # uname 00:11:30.752 12:55:34 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:11:30.752 12:55:34 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 111276 00:11:30.752 killing process with pid 111276 00:11:30.752 12:55:34 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:11:30.753 12:55:34 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:11:30.753 12:55:34 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 111276' 00:11:30.753 12:55:34 -- common/autotest_common.sh@943 -- # kill 111276 00:11:30.753 12:55:34 -- common/autotest_common.sh@948 -- # wait 111276 00:11:31.689 spdk_app_start is called in Round 0. 00:11:31.689 Shutdown signal received, stop current app iteration 00:11:31.689 Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 reinitialization... 00:11:31.689 spdk_app_start is called in Round 1. 00:11:31.689 Shutdown signal received, stop current app iteration 00:11:31.689 Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 reinitialization... 00:11:31.689 spdk_app_start is called in Round 2. 00:11:31.689 Shutdown signal received, stop current app iteration 00:11:31.689 Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 reinitialization... 00:11:31.689 spdk_app_start is called in Round 3. 
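killprocess, whose internals the trace above walks through, amounts to the following (a sketch; the sudo-wrapped branch never fires in this log, so its real handling is elided behind a placeholder):

killprocess() {
  local pid=$1 process_name
  kill -0 "$pid"                                     # fail fast if the process already died
  if [ "$(uname)" = Linux ]; then
    process_name=$(ps --no-headers -o comm= "$pid")  # reactor_0 for these targets
  fi
  [ "$process_name" = sudo ] && return 1             # placeholder: the in-tree helper unwraps sudo here
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid"                                        # reap it so the next round starts clean
}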
00:11:31.689 Shutdown signal received, stop current app iteration 00:11:31.689 ************************************ 00:11:31.689 END TEST app_repeat 00:11:31.689 ************************************ 00:11:31.689 12:55:35 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:11:31.689 12:55:35 -- event/event.sh@42 -- # return 0 00:11:31.689 00:11:31.689 real 0m20.989s 00:11:31.689 user 0m44.853s 00:11:31.689 sys 0m2.760s 00:11:31.689 12:55:35 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:11:31.689 12:55:35 -- common/autotest_common.sh@10 -- # set +x 00:11:31.689 12:55:35 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:11:31.689 12:55:35 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:11:31.689 12:55:35 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:11:31.689 12:55:35 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:11:31.689 12:55:35 -- common/autotest_common.sh@10 -- # set +x 00:11:31.948 ************************************ 00:11:31.948 START TEST cpu_locks 00:11:31.948 ************************************ 00:11:31.948 12:55:35 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:11:31.948 * Looking for test storage... 00:11:31.948 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:11:31.948 12:55:35 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:11:31.948 12:55:35 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:11:31.948 12:55:35 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:11:31.948 12:55:35 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:11:31.948 12:55:35 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:11:31.948 12:55:35 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:11:31.948 12:55:35 -- common/autotest_common.sh@10 -- # set +x 00:11:31.948 ************************************ 00:11:31.948 START TEST default_locks 00:11:31.948 ************************************ 00:11:31.948 12:55:35 -- common/autotest_common.sh@1099 -- # default_locks 00:11:31.948 12:55:35 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=111864 00:11:31.948 12:55:35 -- event/cpu_locks.sh@47 -- # waitforlisten 111864 00:11:31.948 12:55:35 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:31.948 12:55:35 -- common/autotest_common.sh@817 -- # '[' -z 111864 ']' 00:11:31.948 12:55:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.948 12:55:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:31.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:31.948 12:55:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.948 12:55:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:31.948 12:55:35 -- common/autotest_common.sh@10 -- # set +x 00:11:31.948 [2024-04-17 12:55:36.082674] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
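The asterisk banners and the real/user/sys summaries framing each test come from the run_test wrapper; its observable behaviour is roughly this (a gist only, the in-tree wrapper also juggles xtrace state around the invocation):

run_test() {
  local test_name=$1; shift
  echo "************************************"
  echo "START TEST $test_name"
  echo "************************************"
  time "$@"                                  # produces the real/user/sys lines seen above
  echo "************************************"
  echo "END TEST $test_name"
  echo "************************************"
}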
00:11:31.948 [2024-04-17 12:55:36.083108] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111864 ] 00:11:32.268 [2024-04-17 12:55:36.253458] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.540 [2024-04-17 12:55:36.462939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.108 12:55:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:33.108 12:55:37 -- common/autotest_common.sh@850 -- # return 0 00:11:33.108 12:55:37 -- event/cpu_locks.sh@49 -- # locks_exist 111864 00:11:33.108 12:55:37 -- event/cpu_locks.sh@22 -- # lslocks -p 111864 00:11:33.108 12:55:37 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:33.367 12:55:37 -- event/cpu_locks.sh@50 -- # killprocess 111864 00:11:33.367 12:55:37 -- common/autotest_common.sh@924 -- # '[' -z 111864 ']' 00:11:33.367 12:55:37 -- common/autotest_common.sh@928 -- # kill -0 111864 00:11:33.626 12:55:37 -- common/autotest_common.sh@929 -- # uname 00:11:33.626 12:55:37 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:11:33.626 12:55:37 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 111864 00:11:33.626 12:55:37 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:11:33.626 killing process with pid 111864 00:11:33.626 12:55:37 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:11:33.626 12:55:37 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 111864' 00:11:33.626 12:55:37 -- common/autotest_common.sh@943 -- # kill 111864 00:11:33.626 12:55:37 -- common/autotest_common.sh@948 -- # wait 111864 00:11:36.161 12:55:39 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 111864 00:11:36.161 12:55:39 -- common/autotest_common.sh@638 -- # local es=0 00:11:36.161 12:55:39 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 111864 00:11:36.161 12:55:39 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:11:36.161 12:55:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:36.161 12:55:39 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:11:36.161 12:55:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:36.161 12:55:39 -- common/autotest_common.sh@641 -- # waitforlisten 111864 00:11:36.161 12:55:39 -- common/autotest_common.sh@817 -- # '[' -z 111864 ']' 00:11:36.161 12:55:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.161 12:55:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:36.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.161 12:55:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
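locks_exist, used above to assert that pid 111864 really holds its core lock, is a one-liner matching the traced commands (the lock itself is a POSIX lock on a /var/tmp/spdk_cpu_lock_* file):

locks_exist() {
  lslocks -p "$1" | grep -q spdk_cpu_lock   # succeeds only while the target holds a core lock
}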
00:11:36.161 12:55:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:36.161 12:55:39 -- common/autotest_common.sh@10 -- # set +x 00:11:36.161 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (111864) - No such process 00:11:36.161 ERROR: process (pid: 111864) is no longer running 00:11:36.161 12:55:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:36.161 12:55:39 -- common/autotest_common.sh@850 -- # return 1 00:11:36.161 12:55:39 -- common/autotest_common.sh@641 -- # es=1 00:11:36.162 12:55:39 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:36.162 12:55:39 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:36.162 12:55:39 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:36.162 12:55:39 -- event/cpu_locks.sh@54 -- # no_locks 00:11:36.162 12:55:39 -- event/cpu_locks.sh@26 -- # lock_files=(/var/tmp/spdk_cpu_lock*) 00:11:36.162 12:55:39 -- event/cpu_locks.sh@26 -- # local lock_files 00:11:36.162 12:55:39 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:36.162 00:11:36.162 real 0m3.778s 00:11:36.162 user 0m3.840s 00:11:36.162 sys 0m0.632s 00:11:36.162 12:55:39 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:11:36.162 12:55:39 -- common/autotest_common.sh@10 -- # set +x 00:11:36.162 ************************************ 00:11:36.162 END TEST default_locks 00:11:36.162 ************************************ 00:11:36.162 12:55:39 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:11:36.162 12:55:39 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:11:36.162 12:55:39 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:11:36.162 12:55:39 -- common/autotest_common.sh@10 -- # set +x 00:11:36.162 ************************************ 00:11:36.162 START TEST default_locks_via_rpc 00:11:36.162 ************************************ 00:11:36.162 12:55:39 -- common/autotest_common.sh@1099 -- # default_locks_via_rpc 00:11:36.162 12:55:39 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=111947 00:11:36.162 12:55:39 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:36.162 12:55:39 -- event/cpu_locks.sh@63 -- # waitforlisten 111947 00:11:36.162 12:55:39 -- common/autotest_common.sh@817 -- # '[' -z 111947 ']' 00:11:36.162 12:55:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.162 12:55:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:36.162 12:55:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.162 12:55:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:36.162 12:55:39 -- common/autotest_common.sh@10 -- # set +x 00:11:36.162 [2024-04-17 12:55:39.923259] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
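Two helpers traced just above are worth a gloss: NOT inverts a command's exit status so an expected failure passes the test, and no_locks asserts that every /var/tmp/spdk_cpu_lock* file is gone. Sketches follow; the in-tree NOT additionally whitelists certain signal exits, elided here, and no_locks relies on nullglob so an unmatched glob yields an empty array:

NOT() {
  local es=0
  "$@" || es=$?
  (( es > 128 )) && return 1   # killed by a signal still counts as a real failure (simplified)
  (( es != 0 ))                # success only when the wrapped command failed cleanly
}

no_locks() {
  local lock_files=(/var/tmp/spdk_cpu_lock*)
  (( ${#lock_files[@]} == 0 ))   # leftovers would mean a target leaked its core lock
}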
00:11:36.162 [2024-04-17 12:55:39.923471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111947 ] 00:11:36.162 [2024-04-17 12:55:40.093286] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.421 [2024-04-17 12:55:40.340164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.987 12:55:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:36.987 12:55:41 -- common/autotest_common.sh@850 -- # return 0 00:11:36.987 12:55:41 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:11:36.987 12:55:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:36.987 12:55:41 -- common/autotest_common.sh@10 -- # set +x 00:11:36.987 12:55:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:36.987 12:55:41 -- event/cpu_locks.sh@67 -- # no_locks 00:11:36.987 12:55:41 -- event/cpu_locks.sh@26 -- # lock_files=(/var/tmp/spdk_cpu_lock*) 00:11:36.988 12:55:41 -- event/cpu_locks.sh@26 -- # local lock_files 00:11:36.988 12:55:41 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:36.988 12:55:41 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:11:36.988 12:55:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:36.988 12:55:41 -- common/autotest_common.sh@10 -- # set +x 00:11:37.246 12:55:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:37.246 12:55:41 -- event/cpu_locks.sh@71 -- # locks_exist 111947 00:11:37.246 12:55:41 -- event/cpu_locks.sh@22 -- # lslocks -p 111947 00:11:37.246 12:55:41 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:37.246 12:55:41 -- event/cpu_locks.sh@73 -- # killprocess 111947 00:11:37.246 12:55:41 -- common/autotest_common.sh@924 -- # '[' -z 111947 ']' 00:11:37.246 12:55:41 -- common/autotest_common.sh@928 -- # kill -0 111947 00:11:37.246 12:55:41 -- common/autotest_common.sh@929 -- # uname 00:11:37.246 12:55:41 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:11:37.246 12:55:41 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 111947 00:11:37.505 12:55:41 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:11:37.505 12:55:41 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:11:37.505 killing process with pid 111947 00:11:37.505 12:55:41 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 111947' 00:11:37.505 12:55:41 -- common/autotest_common.sh@943 -- # kill 111947 00:11:37.505 12:55:41 -- common/autotest_common.sh@948 -- # wait 111947 00:11:39.440 00:11:39.440 real 0m3.729s 00:11:39.440 user 0m3.706s 00:11:39.440 sys 0m0.632s 00:11:39.440 12:55:43 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:11:39.440 ************************************ 00:11:39.440 END TEST default_locks_via_rpc 00:11:39.440 ************************************ 00:11:39.440 12:55:43 -- common/autotest_common.sh@10 -- # set +x 00:11:39.699 12:55:43 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:11:39.699 12:55:43 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:11:39.699 12:55:43 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:11:39.699 12:55:43 -- common/autotest_common.sh@10 -- # set +x 00:11:39.699 ************************************ 00:11:39.699 START TEST non_locking_app_on_locked_coremask 00:11:39.699 
************************************ 00:11:39.699 12:55:43 -- common/autotest_common.sh@1099 -- # non_locking_app_on_locked_coremask 00:11:39.699 12:55:43 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:39.699 12:55:43 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=112029 00:11:39.699 12:55:43 -- event/cpu_locks.sh@81 -- # waitforlisten 112029 /var/tmp/spdk.sock 00:11:39.699 12:55:43 -- common/autotest_common.sh@817 -- # '[' -z 112029 ']' 00:11:39.699 12:55:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.699 12:55:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:39.699 12:55:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.699 12:55:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:39.699 12:55:43 -- common/autotest_common.sh@10 -- # set +x 00:11:39.699 [2024-04-17 12:55:43.717158] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:11:39.699 [2024-04-17 12:55:43.717595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112029 ] 00:11:39.958 [2024-04-17 12:55:43.889861] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.216 [2024-04-17 12:55:44.175508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.150 12:55:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:41.150 12:55:45 -- common/autotest_common.sh@850 -- # return 0 00:11:41.150 12:55:45 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=112050 00:11:41.150 12:55:45 -- event/cpu_locks.sh@85 -- # waitforlisten 112050 /var/tmp/spdk2.sock 00:11:41.150 12:55:45 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:11:41.150 12:55:45 -- common/autotest_common.sh@817 -- # '[' -z 112050 ']' 00:11:41.150 12:55:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:41.150 12:55:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:41.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:41.150 12:55:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:41.150 12:55:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:41.150 12:55:45 -- common/autotest_common.sh@10 -- # set +x 00:11:41.150 [2024-04-17 12:55:45.132840] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:11:41.150 [2024-04-17 12:55:45.133031] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112050 ] 00:11:41.409 [2024-04-17 12:55:45.314185] app.c: 818:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
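non_locking_app_on_locked_coremask, now in flight, builds exactly the scenario its name promises: a second target shares core 0 by opting out of lock enforcement (sketch of the traced launches, same $rootdir assumption as before):

"$rootdir"/build/bin/spdk_tgt -m 0x1 &                 # first target claims core 0 and its lock file
spdk_tgt_pid=$!
waitforlisten "$spdk_tgt_pid" /var/tmp/spdk.sock
"$rootdir"/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
spdk_tgt_pid2=$!                                       # same core, but lock checks disabled
waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock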
00:11:41.409 [2024-04-17 12:55:45.314259] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.669 [2024-04-17 12:55:45.758424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.202 12:55:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:44.202 12:55:47 -- common/autotest_common.sh@850 -- # return 0 00:11:44.202 12:55:47 -- event/cpu_locks.sh@87 -- # locks_exist 112029 00:11:44.202 12:55:47 -- event/cpu_locks.sh@22 -- # lslocks -p 112029 00:11:44.202 12:55:47 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:44.461 12:55:48 -- event/cpu_locks.sh@89 -- # killprocess 112029 00:11:44.461 12:55:48 -- common/autotest_common.sh@924 -- # '[' -z 112029 ']' 00:11:44.461 12:55:48 -- common/autotest_common.sh@928 -- # kill -0 112029 00:11:44.461 12:55:48 -- common/autotest_common.sh@929 -- # uname 00:11:44.461 12:55:48 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:11:44.461 12:55:48 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 112029 00:11:44.461 12:55:48 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:11:44.461 killing process with pid 112029 00:11:44.461 12:55:48 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:11:44.461 12:55:48 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 112029' 00:11:44.461 12:55:48 -- common/autotest_common.sh@943 -- # kill 112029 00:11:44.461 12:55:48 -- common/autotest_common.sh@948 -- # wait 112029 00:11:49.730 12:55:53 -- event/cpu_locks.sh@90 -- # killprocess 112050 00:11:49.730 12:55:53 -- common/autotest_common.sh@924 -- # '[' -z 112050 ']' 00:11:49.730 12:55:53 -- common/autotest_common.sh@928 -- # kill -0 112050 00:11:49.730 12:55:53 -- common/autotest_common.sh@929 -- # uname 00:11:49.730 12:55:53 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:11:49.730 12:55:53 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 112050 00:11:49.730 12:55:53 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:11:49.730 killing process with pid 112050 00:11:49.730 12:55:53 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:11:49.730 12:55:53 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 112050' 00:11:49.730 12:55:53 -- common/autotest_common.sh@943 -- # kill 112050 00:11:49.730 12:55:53 -- common/autotest_common.sh@948 -- # wait 112050 00:11:52.258 00:11:52.258 real 0m12.451s 00:11:52.258 user 0m13.022s 00:11:52.258 sys 0m1.382s 00:11:52.258 ************************************ 00:11:52.258 END TEST non_locking_app_on_locked_coremask 00:11:52.258 ************************************ 00:11:52.258 12:55:56 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:11:52.258 12:55:56 -- common/autotest_common.sh@10 -- # set +x 00:11:52.258 12:55:56 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:11:52.258 12:55:56 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:11:52.258 12:55:56 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:11:52.258 12:55:56 -- common/autotest_common.sh@10 -- # set +x 00:11:52.258 ************************************ 00:11:52.258 START TEST locking_app_on_unlocked_coremask 00:11:52.258 ************************************ 00:11:52.258 12:55:56 -- common/autotest_common.sh@1099 -- # locking_app_on_unlocked_coremask 00:11:52.258 12:55:56 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=112260 00:11:52.258 12:55:56 -- event/cpu_locks.sh@99 -- # waitforlisten 112260 
/var/tmp/spdk.sock 00:11:52.258 12:55:56 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:11:52.258 12:55:56 -- common/autotest_common.sh@817 -- # '[' -z 112260 ']' 00:11:52.258 12:55:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.258 12:55:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:52.258 12:55:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.258 12:55:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:52.258 12:55:56 -- common/autotest_common.sh@10 -- # set +x 00:11:52.258 [2024-04-17 12:55:56.241694] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:11:52.258 [2024-04-17 12:55:56.241878] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112260 ] 00:11:52.258 [2024-04-17 12:55:56.401274] app.c: 818:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:11:52.258 [2024-04-17 12:55:56.401348] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.517 [2024-04-17 12:55:56.606509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.452 12:55:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:53.452 12:55:57 -- common/autotest_common.sh@850 -- # return 0 00:11:53.452 12:55:57 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=112288 00:11:53.452 12:55:57 -- event/cpu_locks.sh@103 -- # waitforlisten 112288 /var/tmp/spdk2.sock 00:11:53.452 12:55:57 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:53.452 12:55:57 -- common/autotest_common.sh@817 -- # '[' -z 112288 ']' 00:11:53.452 12:55:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:53.452 12:55:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:53.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:53.452 12:55:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:53.452 12:55:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:53.452 12:55:57 -- common/autotest_common.sh@10 -- # set +x 00:11:53.452 [2024-04-17 12:55:57.427444] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:11:53.452 [2024-04-17 12:55:57.427641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112288 ] 00:11:53.710 [2024-04-17 12:55:57.597595] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.970 [2024-04-17 12:55:58.027759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.507 12:56:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:56.507 12:56:00 -- common/autotest_common.sh@850 -- # return 0 00:11:56.507 12:56:00 -- event/cpu_locks.sh@105 -- # locks_exist 112288 00:11:56.507 12:56:00 -- event/cpu_locks.sh@22 -- # lslocks -p 112288 00:11:56.507 12:56:00 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:56.507 12:56:00 -- event/cpu_locks.sh@107 -- # killprocess 112260 00:11:56.507 12:56:00 -- common/autotest_common.sh@924 -- # '[' -z 112260 ']' 00:11:56.507 12:56:00 -- common/autotest_common.sh@928 -- # kill -0 112260 00:11:56.507 12:56:00 -- common/autotest_common.sh@929 -- # uname 00:11:56.507 12:56:00 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:11:56.507 12:56:00 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 112260 00:11:56.507 12:56:00 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:11:56.507 12:56:00 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:11:56.507 killing process with pid 112260 00:11:56.507 12:56:00 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 112260' 00:11:56.507 12:56:00 -- common/autotest_common.sh@943 -- # kill 112260 00:11:56.507 12:56:00 -- common/autotest_common.sh@948 -- # wait 112260 00:12:00.724 12:56:04 -- event/cpu_locks.sh@108 -- # killprocess 112288 00:12:00.724 12:56:04 -- common/autotest_common.sh@924 -- # '[' -z 112288 ']' 00:12:00.724 12:56:04 -- common/autotest_common.sh@928 -- # kill -0 112288 00:12:00.724 12:56:04 -- common/autotest_common.sh@929 -- # uname 00:12:00.724 12:56:04 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:12:00.724 12:56:04 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 112288 00:12:00.724 12:56:04 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:12:00.724 killing process with pid 112288 00:12:00.724 12:56:04 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:12:00.724 12:56:04 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 112288' 00:12:00.724 12:56:04 -- common/autotest_common.sh@943 -- # kill 112288 00:12:00.724 12:56:04 -- common/autotest_common.sh@948 -- # wait 112288 00:12:03.281 00:12:03.281 real 0m10.863s 00:12:03.281 user 0m11.363s 00:12:03.281 sys 0m1.184s 00:12:03.281 12:56:07 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:12:03.281 ************************************ 00:12:03.281 12:56:07 -- common/autotest_common.sh@10 -- # set +x 00:12:03.281 END TEST locking_app_on_unlocked_coremask 00:12:03.281 ************************************ 00:12:03.281 12:56:07 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:12:03.281 12:56:07 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:12:03.281 12:56:07 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:03.281 12:56:07 -- common/autotest_common.sh@10 -- # set +x 00:12:03.281 ************************************ 00:12:03.281 START TEST locking_app_on_locked_coremask 00:12:03.281 
************************************ 00:12:03.281 12:56:07 -- common/autotest_common.sh@1099 -- # locking_app_on_locked_coremask 00:12:03.281 12:56:07 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=112459 00:12:03.281 12:56:07 -- event/cpu_locks.sh@116 -- # waitforlisten 112459 /var/tmp/spdk.sock 00:12:03.281 12:56:07 -- common/autotest_common.sh@817 -- # '[' -z 112459 ']' 00:12:03.281 12:56:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.281 12:56:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:03.281 12:56:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.281 12:56:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:03.281 12:56:07 -- common/autotest_common.sh@10 -- # set +x 00:12:03.281 12:56:07 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:03.281 [2024-04-17 12:56:07.186968] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:12:03.281 [2024-04-17 12:56:07.187354] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112459 ] 00:12:03.281 [2024-04-17 12:56:07.358970] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.540 [2024-04-17 12:56:07.633267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.476 12:56:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:04.476 12:56:08 -- common/autotest_common.sh@850 -- # return 0 00:12:04.476 12:56:08 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=112480 00:12:04.476 12:56:08 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:12:04.476 12:56:08 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 112480 /var/tmp/spdk2.sock 00:12:04.476 12:56:08 -- common/autotest_common.sh@638 -- # local es=0 00:12:04.476 12:56:08 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 112480 /var/tmp/spdk2.sock 00:12:04.476 12:56:08 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:12:04.476 12:56:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:04.476 12:56:08 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:12:04.476 12:56:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:04.476 12:56:08 -- common/autotest_common.sh@641 -- # waitforlisten 112480 /var/tmp/spdk2.sock 00:12:04.476 12:56:08 -- common/autotest_common.sh@817 -- # '[' -z 112480 ']' 00:12:04.476 12:56:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:04.476 12:56:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:04.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:04.476 12:56:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:04.476 12:56:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:04.476 12:56:08 -- common/autotest_common.sh@10 -- # set +x 00:12:04.476 [2024-04-17 12:56:08.524689] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:12:04.477 [2024-04-17 12:56:08.524867] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112480 ] 00:12:04.735 [2024-04-17 12:56:08.701871] app.c: 688:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 112459 has claimed it. 00:12:04.735 [2024-04-17 12:56:08.701997] app.c: 814:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:12:05.359 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (112480) - No such process 00:12:05.359 ERROR: process (pid: 112480) is no longer running 00:12:05.359 12:56:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:05.359 12:56:09 -- common/autotest_common.sh@850 -- # return 1 00:12:05.359 12:56:09 -- common/autotest_common.sh@641 -- # es=1 00:12:05.359 12:56:09 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:05.359 12:56:09 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:05.359 12:56:09 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:05.359 12:56:09 -- event/cpu_locks.sh@122 -- # locks_exist 112459 00:12:05.359 12:56:09 -- event/cpu_locks.sh@22 -- # lslocks -p 112459 00:12:05.359 12:56:09 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:05.359 12:56:09 -- event/cpu_locks.sh@124 -- # killprocess 112459 00:12:05.359 12:56:09 -- common/autotest_common.sh@924 -- # '[' -z 112459 ']' 00:12:05.359 12:56:09 -- common/autotest_common.sh@928 -- # kill -0 112459 00:12:05.359 12:56:09 -- common/autotest_common.sh@929 -- # uname 00:12:05.359 12:56:09 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:12:05.359 12:56:09 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 112459 00:12:05.359 12:56:09 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:12:05.359 12:56:09 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:12:05.359 killing process with pid 112459 00:12:05.359 12:56:09 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 112459' 00:12:05.359 12:56:09 -- common/autotest_common.sh@943 -- # kill 112459 00:12:05.359 12:56:09 -- common/autotest_common.sh@948 -- # wait 112459 00:12:07.890 00:12:07.890 real 0m4.526s 00:12:07.890 user 0m4.876s 00:12:07.890 sys 0m0.803s 00:12:07.890 12:56:11 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:12:07.890 ************************************ 00:12:07.890 END TEST locking_app_on_locked_coremask 00:12:07.890 ************************************ 00:12:07.890 12:56:11 -- common/autotest_common.sh@10 -- # set +x 00:12:07.890 12:56:11 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:12:07.890 12:56:11 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:12:07.890 12:56:11 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:07.890 12:56:11 -- common/autotest_common.sh@10 -- # set +x 00:12:07.890 ************************************ 00:12:07.890 START TEST locking_overlapped_coremask 00:12:07.890 ************************************ 00:12:07.890 12:56:11 -- common/autotest_common.sh@1099 -- # locking_overlapped_coremask 00:12:07.890 12:56:11 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=112560 00:12:07.890 12:56:11 -- event/cpu_locks.sh@133 -- # waitforlisten 112560 /var/tmp/spdk.sock 00:12:07.890 12:56:11 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 
0x7 00:12:07.890 12:56:11 -- common/autotest_common.sh@817 -- # '[' -z 112560 ']' 00:12:07.890 12:56:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.890 12:56:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:07.890 12:56:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.890 12:56:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:07.890 12:56:11 -- common/autotest_common.sh@10 -- # set +x 00:12:07.890 [2024-04-17 12:56:11.800203] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:12:07.890 [2024-04-17 12:56:11.800485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112560 ] 00:12:07.890 [2024-04-17 12:56:11.985779] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:08.161 [2024-04-17 12:56:12.231870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:08.161 [2024-04-17 12:56:12.231954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:08.161 [2024-04-17 12:56:12.231967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.123 12:56:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:09.123 12:56:13 -- common/autotest_common.sh@850 -- # return 0 00:12:09.123 12:56:13 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=112588 00:12:09.123 12:56:13 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 112588 /var/tmp/spdk2.sock 00:12:09.123 12:56:13 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:12:09.123 12:56:13 -- common/autotest_common.sh@638 -- # local es=0 00:12:09.123 12:56:13 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 112588 /var/tmp/spdk2.sock 00:12:09.123 12:56:13 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:12:09.123 12:56:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:09.123 12:56:13 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:12:09.123 12:56:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:09.123 12:56:13 -- common/autotest_common.sh@641 -- # waitforlisten 112588 /var/tmp/spdk2.sock 00:12:09.123 12:56:13 -- common/autotest_common.sh@817 -- # '[' -z 112588 ']' 00:12:09.123 12:56:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:09.123 12:56:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:09.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:09.123 12:56:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:09.123 12:56:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:09.123 12:56:13 -- common/autotest_common.sh@10 -- # set +x 00:12:09.123 [2024-04-17 12:56:13.109843] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:12:09.123 [2024-04-17 12:56:13.110055] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112588 ] 00:12:09.382 [2024-04-17 12:56:13.308054] app.c: 688:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 112560 has claimed it. 00:12:09.382 [2024-04-17 12:56:13.308164] app.c: 814:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:12:09.947 ERROR: process (pid: 112588) is no longer running 00:12:09.947 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (112588) - No such process 00:12:09.947 12:56:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:09.947 12:56:13 -- common/autotest_common.sh@850 -- # return 1 00:12:09.947 12:56:13 -- common/autotest_common.sh@641 -- # es=1 00:12:09.947 12:56:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:09.947 12:56:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:09.947 12:56:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:09.947 12:56:13 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:12:09.947 12:56:13 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:12:09.947 12:56:13 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:12:09.947 12:56:13 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:12:09.947 12:56:13 -- event/cpu_locks.sh@141 -- # killprocess 112560 00:12:09.947 12:56:13 -- common/autotest_common.sh@924 -- # '[' -z 112560 ']' 00:12:09.947 12:56:13 -- common/autotest_common.sh@928 -- # kill -0 112560 00:12:09.947 12:56:13 -- common/autotest_common.sh@929 -- # uname 00:12:09.948 12:56:13 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:12:09.948 12:56:13 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 112560 00:12:09.948 12:56:13 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:12:09.948 killing process with pid 112560 00:12:09.948 12:56:13 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:12:09.948 12:56:13 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 112560' 00:12:09.948 12:56:13 -- common/autotest_common.sh@943 -- # kill 112560 00:12:09.948 12:56:13 -- common/autotest_common.sh@948 -- # wait 112560 00:12:12.476 00:12:12.476 real 0m4.853s 00:12:12.476 user 0m12.673s 00:12:12.476 sys 0m0.637s 00:12:12.476 12:56:16 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:12:12.476 ************************************ 00:12:12.476 END TEST locking_overlapped_coremask 00:12:12.476 ************************************ 00:12:12.476 12:56:16 -- common/autotest_common.sh@10 -- # set +x 00:12:12.476 12:56:16 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:12:12.476 12:56:16 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:12:12.476 12:56:16 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:12.476 12:56:16 -- common/autotest_common.sh@10 -- # set +x 00:12:12.735 ************************************ 00:12:12.735 START TEST locking_overlapped_coremask_via_rpc 00:12:12.735 
************************************ 00:12:12.735 12:56:16 -- common/autotest_common.sh@1099 -- # locking_overlapped_coremask_via_rpc 00:12:12.735 12:56:16 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=112683 00:12:12.735 12:56:16 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:12:12.735 12:56:16 -- event/cpu_locks.sh@149 -- # waitforlisten 112683 /var/tmp/spdk.sock 00:12:12.735 12:56:16 -- common/autotest_common.sh@817 -- # '[' -z 112683 ']' 00:12:12.735 12:56:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.735 12:56:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:12.735 12:56:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.735 12:56:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:12.735 12:56:16 -- common/autotest_common.sh@10 -- # set +x 00:12:12.735 [2024-04-17 12:56:16.717502] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:12:12.735 [2024-04-17 12:56:16.717694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112683 ] 00:12:12.994 [2024-04-17 12:56:16.892044] app.c: 818:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:12:12.994 [2024-04-17 12:56:16.892131] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:12.994 [2024-04-17 12:56:17.100650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:12.994 [2024-04-17 12:56:17.100737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:12.994 [2024-04-17 12:56:17.100751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.931 12:56:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:13.931 12:56:17 -- common/autotest_common.sh@850 -- # return 0 00:12:13.931 12:56:17 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=112706 00:12:13.931 12:56:17 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:12:13.931 12:56:17 -- event/cpu_locks.sh@153 -- # waitforlisten 112706 /var/tmp/spdk2.sock 00:12:13.931 12:56:17 -- common/autotest_common.sh@817 -- # '[' -z 112706 ']' 00:12:13.931 12:56:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:13.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:13.931 12:56:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:13.931 12:56:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:13.931 12:56:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:13.931 12:56:17 -- common/autotest_common.sh@10 -- # set +x 00:12:13.931 [2024-04-17 12:56:17.955642] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:12:13.931 [2024-04-17 12:56:17.956456] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112706 ] 00:12:14.191 [2024-04-17 12:56:18.142241] app.c: 818:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:12:14.191 [2024-04-17 12:56:18.142365] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:14.449 [2024-04-17 12:56:18.576204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:14.450 [2024-04-17 12:56:18.587935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:14.450 [2024-04-17 12:56:18.587939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:12:17.009 12:56:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:17.009 12:56:20 -- common/autotest_common.sh@850 -- # return 0 00:12:17.009 12:56:20 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:12:17.009 12:56:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:17.009 12:56:20 -- common/autotest_common.sh@10 -- # set +x 00:12:17.009 12:56:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:17.009 12:56:20 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:17.009 12:56:20 -- common/autotest_common.sh@638 -- # local es=0 00:12:17.009 12:56:20 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:17.009 12:56:20 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:12:17.009 12:56:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:17.009 12:56:20 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:12:17.009 12:56:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:17.009 12:56:20 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:17.009 12:56:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:17.009 12:56:20 -- common/autotest_common.sh@10 -- # set +x 00:12:17.009 [2024-04-17 12:56:20.691980] app.c: 688:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 112683 has claimed it. 00:12:17.009 request: 00:12:17.009 { 00:12:17.009 "method": "framework_enable_cpumask_locks", 00:12:17.009 "req_id": 1 00:12:17.009 } 00:12:17.009 Got JSON-RPC error response 00:12:17.009 response: 00:12:17.009 { 00:12:17.009 "code": -32603, 00:12:17.009 "message": "Failed to claim CPU core: 2" 00:12:17.009 } 00:12:17.009 12:56:20 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:12:17.009 12:56:20 -- common/autotest_common.sh@641 -- # es=1 00:12:17.009 12:56:20 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:17.009 12:56:20 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:17.009 12:56:20 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:17.009 12:56:20 -- event/cpu_locks.sh@158 -- # waitforlisten 112683 /var/tmp/spdk.sock 00:12:17.009 12:56:20 -- common/autotest_common.sh@817 -- # '[' -z 112683 ']' 00:12:17.009 12:56:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.009 12:56:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:17.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
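The exchange above is the expected failure path: the first target (pid 112683) has just taken its core locks via framework_enable_cpumask_locks, so the same RPC against the second target's socket returns the JSON-RPC internal error (-32603, "Failed to claim CPU core: 2") because core 2 is already locked. A sketch of that exchange with SPDK's rpc.py, socket paths taken from this log:

    # first instance claims its cores; the second then fails to claim core 2
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # -> "Failed to claim CPU core: 2"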
00:12:17.009 12:56:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:17.009 12:56:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:17.009 12:56:20 -- common/autotest_common.sh@10 -- # set +x 00:12:17.009 12:56:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:17.009 12:56:20 -- common/autotest_common.sh@850 -- # return 0 00:12:17.009 12:56:20 -- event/cpu_locks.sh@159 -- # waitforlisten 112706 /var/tmp/spdk2.sock 00:12:17.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:17.009 12:56:20 -- common/autotest_common.sh@817 -- # '[' -z 112706 ']' 00:12:17.009 12:56:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:17.009 12:56:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:17.009 12:56:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:17.009 12:56:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:17.009 12:56:20 -- common/autotest_common.sh@10 -- # set +x 00:12:17.268 12:56:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:17.268 12:56:21 -- common/autotest_common.sh@850 -- # return 0 00:12:17.268 12:56:21 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:12:17.268 12:56:21 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:12:17.268 12:56:21 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:12:17.268 12:56:21 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:12:17.268 00:12:17.268 real 0m4.543s 00:12:17.268 user 0m1.547s 00:12:17.268 sys 0m0.166s 00:12:17.269 12:56:21 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:12:17.269 12:56:21 -- common/autotest_common.sh@10 -- # set +x 00:12:17.269 ************************************ 00:12:17.269 END TEST locking_overlapped_coremask_via_rpc 00:12:17.269 ************************************ 00:12:17.269 12:56:21 -- event/cpu_locks.sh@174 -- # cleanup 00:12:17.269 12:56:21 -- event/cpu_locks.sh@15 -- # [[ -z 112683 ]] 00:12:17.269 12:56:21 -- event/cpu_locks.sh@15 -- # killprocess 112683 00:12:17.269 12:56:21 -- common/autotest_common.sh@924 -- # '[' -z 112683 ']' 00:12:17.269 12:56:21 -- common/autotest_common.sh@928 -- # kill -0 112683 00:12:17.269 12:56:21 -- common/autotest_common.sh@929 -- # uname 00:12:17.269 12:56:21 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:12:17.269 12:56:21 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 112683 00:12:17.269 12:56:21 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:12:17.269 killing process with pid 112683 00:12:17.269 12:56:21 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:12:17.269 12:56:21 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 112683' 00:12:17.269 12:56:21 -- common/autotest_common.sh@943 -- # kill 112683 00:12:17.269 12:56:21 -- common/autotest_common.sh@948 -- # wait 112683 00:12:19.834 12:56:23 -- event/cpu_locks.sh@16 -- # [[ -z 112706 ]] 00:12:19.834 12:56:23 -- event/cpu_locks.sh@16 -- # killprocess 112706 00:12:19.834 12:56:23 -- common/autotest_common.sh@924 -- # '[' -z 112706 ']' 
00:12:19.834 12:56:23 -- common/autotest_common.sh@928 -- # kill -0 112706 00:12:19.834 12:56:23 -- common/autotest_common.sh@929 -- # uname 00:12:19.834 12:56:23 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:12:19.834 12:56:23 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 112706 00:12:19.834 12:56:23 -- common/autotest_common.sh@930 -- # process_name=reactor_2 00:12:19.834 killing process with pid 112706 00:12:19.834 12:56:23 -- common/autotest_common.sh@934 -- # '[' reactor_2 = sudo ']' 00:12:19.834 12:56:23 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 112706' 00:12:19.834 12:56:23 -- common/autotest_common.sh@943 -- # kill 112706 00:12:19.834 12:56:23 -- common/autotest_common.sh@948 -- # wait 112706 00:12:21.736 12:56:25 -- event/cpu_locks.sh@18 -- # rm -f 00:12:21.736 12:56:25 -- event/cpu_locks.sh@1 -- # cleanup 00:12:21.736 12:56:25 -- event/cpu_locks.sh@15 -- # [[ -z 112683 ]] 00:12:21.736 12:56:25 -- event/cpu_locks.sh@15 -- # killprocess 112683 00:12:21.736 12:56:25 -- common/autotest_common.sh@924 -- # '[' -z 112683 ']' 00:12:21.736 12:56:25 -- common/autotest_common.sh@928 -- # kill -0 112683 00:12:21.736 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 928: kill: (112683) - No such process 00:12:21.736 Process with pid 112683 is not found 00:12:21.736 12:56:25 -- common/autotest_common.sh@951 -- # echo 'Process with pid 112683 is not found' 00:12:21.736 12:56:25 -- event/cpu_locks.sh@16 -- # [[ -z 112706 ]] 00:12:21.736 12:56:25 -- event/cpu_locks.sh@16 -- # killprocess 112706 00:12:21.736 12:56:25 -- common/autotest_common.sh@924 -- # '[' -z 112706 ']' 00:12:21.736 12:56:25 -- common/autotest_common.sh@928 -- # kill -0 112706 00:12:21.736 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 928: kill: (112706) - No such process 00:12:21.736 Process with pid 112706 is not found 00:12:21.736 12:56:25 -- common/autotest_common.sh@951 -- # echo 'Process with pid 112706 is not found' 00:12:21.736 12:56:25 -- event/cpu_locks.sh@18 -- # rm -f 00:12:21.736 ************************************ 00:12:21.736 END TEST cpu_locks 00:12:21.736 ************************************ 00:12:21.736 00:12:21.736 real 0m49.807s 00:12:21.736 user 1m25.238s 00:12:21.736 sys 0m6.498s 00:12:21.736 12:56:25 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:12:21.736 12:56:25 -- common/autotest_common.sh@10 -- # set +x 00:12:21.736 00:12:21.736 real 1m22.164s 00:12:21.736 user 2m26.751s 00:12:21.736 sys 0m10.327s 00:12:21.736 12:56:25 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:12:21.736 ************************************ 00:12:21.736 END TEST event 00:12:21.736 ************************************ 00:12:21.736 12:56:25 -- common/autotest_common.sh@10 -- # set +x 00:12:21.736 12:56:25 -- spdk/autotest.sh@177 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:12:21.736 12:56:25 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:12:21.736 12:56:25 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:21.736 12:56:25 -- common/autotest_common.sh@10 -- # set +x 00:12:21.736 ************************************ 00:12:21.736 START TEST thread 00:12:21.736 ************************************ 00:12:21.736 12:56:25 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:12:21.736 * Looking for test storage... 
00:12:21.736 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:12:21.736 12:56:25 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:12:21.736 12:56:25 -- common/autotest_common.sh@1075 -- # '[' 8 -le 1 ']' 00:12:21.736 12:56:25 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:21.736 12:56:25 -- common/autotest_common.sh@10 -- # set +x 00:12:21.994 ************************************ 00:12:21.994 START TEST thread_poller_perf 00:12:21.994 ************************************ 00:12:21.994 12:56:25 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:12:21.994 [2024-04-17 12:56:25.953873] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:12:21.994 [2024-04-17 12:56:25.954140] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112911 ] 00:12:21.995 [2024-04-17 12:56:26.136745] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.253 [2024-04-17 12:56:26.350370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.253 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:12:23.700 ====================================== 00:12:23.700 busy:2211871106 (cyc) 00:12:23.700 total_run_count: 279000 00:12:23.700 tsc_hz: 2200000000 (cyc) 00:12:23.700 ====================================== 00:12:23.700 poller_cost: 7927 (cyc), 3603 (nsec) 00:12:23.700 ************************************ 00:12:23.700 END TEST thread_poller_perf 00:12:23.700 ************************************ 00:12:23.700 00:12:23.700 real 0m1.839s 00:12:23.700 user 0m1.614s 00:12:23.700 sys 0m0.125s 00:12:23.700 12:56:27 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:12:23.700 12:56:27 -- common/autotest_common.sh@10 -- # set +x 00:12:23.700 12:56:27 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:12:23.700 12:56:27 -- common/autotest_common.sh@1075 -- # '[' 8 -le 1 ']' 00:12:23.700 12:56:27 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:23.700 12:56:27 -- common/autotest_common.sh@10 -- # set +x 00:12:23.700 ************************************ 00:12:23.700 START TEST thread_poller_perf 00:12:23.700 ************************************ 00:12:23.701 12:56:27 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:12:23.960 [2024-04-17 12:56:27.850984] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:12:23.960 [2024-04-17 12:56:27.851142] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112984 ] 00:12:23.960 [2024-04-17 12:56:28.004663] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.219 [2024-04-17 12:56:28.217259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.219 Running 1000 pollers for 1 seconds with 0 microseconds period. 
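The ====== summary blocks report raw TSC counts; poller_cost is simply busy cycles divided by total_run_count, converted to nanoseconds with tsc_hz. Reproducing the first summary above (the 1 microsecond-period run) in shell arithmetic:

    busy=2211871106; runs=279000; tsc_hz=2200000000
    echo $(( busy / runs ))                        # 7927 cycles per poller invocation
    echo $(( busy / runs * 1000000000 / tsc_hz ))  # ~3603 nsec at 2.2 GHz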
00:12:25.597 ====================================== 00:12:25.597 busy:2204931303 (cyc) 00:12:25.597 total_run_count: 3858000 00:12:25.597 tsc_hz: 2200000000 (cyc) 00:12:25.597 ====================================== 00:12:25.597 poller_cost: 571 (cyc), 259 (nsec) 00:12:25.597 00:12:25.597 real 0m1.782s 00:12:25.597 user 0m1.581s 00:12:25.597 sys 0m0.100s 00:12:25.597 12:56:29 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:12:25.597 12:56:29 -- common/autotest_common.sh@10 -- # set +x 00:12:25.597 ************************************ 00:12:25.597 END TEST thread_poller_perf 00:12:25.597 ************************************ 00:12:25.597 12:56:29 -- thread/thread.sh@17 -- # [[ n != \y ]] 00:12:25.597 12:56:29 -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:12:25.597 12:56:29 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:12:25.597 12:56:29 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:25.597 12:56:29 -- common/autotest_common.sh@10 -- # set +x 00:12:25.597 ************************************ 00:12:25.597 START TEST thread_spdk_lock 00:12:25.597 ************************************ 00:12:25.597 12:56:29 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:12:25.597 [2024-04-17 12:56:29.713371] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:12:25.597 [2024-04-17 12:56:29.713540] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113030 ] 00:12:25.856 [2024-04-17 12:56:29.882989] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:26.114 [2024-04-17 12:56:30.153138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:26.114 [2024-04-17 12:56:30.153147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.684 [2024-04-17 12:56:30.697855] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 955:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:12:26.684 [2024-04-17 12:56:30.698015] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:12:26.684 [2024-04-17 12:56:30.698051] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x5631851af500 00:12:26.684 [2024-04-17 12:56:30.706444] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:12:26.684 [2024-04-17 12:56:30.706585] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1016:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:12:26.684 [2024-04-17 12:56:30.706626] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:12:27.251 Starting test contend 00:12:27.251 Worker Delay Wait us Hold us Total us 00:12:27.251 0 3 117744 194988 312733 00:12:27.251 1 5 44497 301134 345632 00:12:27.251 PASS test contend 00:12:27.251 Starting test hold_by_poller 
00:12:27.251 PASS test hold_by_poller 00:12:27.251 Starting test hold_by_message 00:12:27.251 PASS test hold_by_message 00:12:27.251 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:12:27.251 100014 assertions passed 00:12:27.251 0 assertions failed 00:12:27.251 ************************************ 00:12:27.251 END TEST thread_spdk_lock 00:12:27.251 ************************************ 00:12:27.251 00:12:27.251 real 0m1.422s 00:12:27.251 user 0m1.776s 00:12:27.251 sys 0m0.100s 00:12:27.251 12:56:31 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:12:27.251 12:56:31 -- common/autotest_common.sh@10 -- # set +x 00:12:27.251 00:12:27.251 real 0m5.349s 00:12:27.251 user 0m5.127s 00:12:27.251 sys 0m0.465s 00:12:27.251 12:56:31 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:12:27.251 12:56:31 -- common/autotest_common.sh@10 -- # set +x 00:12:27.251 ************************************ 00:12:27.251 END TEST thread 00:12:27.251 ************************************ 00:12:27.251 12:56:31 -- spdk/autotest.sh@178 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:12:27.251 12:56:31 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:12:27.251 12:56:31 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:27.251 12:56:31 -- common/autotest_common.sh@10 -- # set +x 00:12:27.251 ************************************ 00:12:27.251 START TEST accel 00:12:27.251 ************************************ 00:12:27.251 12:56:31 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:12:27.251 * Looking for test storage... 00:12:27.251 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:12:27.251 12:56:31 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:12:27.251 12:56:31 -- accel/accel.sh@82 -- # get_expected_opcs 00:12:27.252 12:56:31 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:12:27.252 12:56:31 -- accel/accel.sh@62 -- # spdk_tgt_pid=113121 00:12:27.252 12:56:31 -- accel/accel.sh@63 -- # waitforlisten 113121 00:12:27.252 12:56:31 -- common/autotest_common.sh@817 -- # '[' -z 113121 ']' 00:12:27.252 12:56:31 -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:12:27.252 12:56:31 -- accel/accel.sh@61 -- # build_accel_config 00:12:27.252 12:56:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.252 12:56:31 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:27.252 12:56:31 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:27.252 12:56:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:27.252 12:56:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:27.252 12:56:31 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:27.252 12:56:31 -- accel/accel.sh@40 -- # local IFS=, 00:12:27.252 12:56:31 -- accel/accel.sh@41 -- # jq -r . 00:12:27.252 12:56:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:27.252 12:56:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.252 12:56:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:27.252 12:56:31 -- common/autotest_common.sh@10 -- # set +x 00:12:27.252 [2024-04-17 12:56:31.364496] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:12:27.252 [2024-04-17 12:56:31.364685] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113121 ] 00:12:27.511 [2024-04-17 12:56:31.529458] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.770 [2024-04-17 12:56:31.743445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.709 12:56:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:28.709 12:56:32 -- common/autotest_common.sh@850 -- # return 0 00:12:28.709 12:56:32 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:12:28.709 12:56:32 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:12:28.709 12:56:32 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:12:28.709 12:56:32 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:12:28.709 12:56:32 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:12:28.709 12:56:32 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:12:28.709 12:56:32 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:12:28.709 12:56:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:28.709 12:56:32 -- common/autotest_common.sh@10 -- # set +x 00:12:28.709 12:56:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:28.709 12:56:32 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:28.709 12:56:32 -- accel/accel.sh@72 -- # IFS== 00:12:28.709 12:56:32 -- accel/accel.sh@72 -- # read -r opc module 00:12:28.709 12:56:32 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:28.709 12:56:32 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:28.709 12:56:32 -- accel/accel.sh@72 -- # IFS== 00:12:28.709 12:56:32 -- accel/accel.sh@72 -- # read -r opc module 00:12:28.709 12:56:32 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:28.709 12:56:32 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:28.709 12:56:32 -- accel/accel.sh@72 -- # IFS== 00:12:28.709 12:56:32 -- accel/accel.sh@72 -- # read -r opc module 00:12:28.709 12:56:32 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:28.709 12:56:32 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:28.709 12:56:32 -- accel/accel.sh@72 -- # IFS== 00:12:28.709 12:56:32 -- accel/accel.sh@72 -- # read -r opc module 00:12:28.709 12:56:32 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:28.709 12:56:32 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:28.709 12:56:32 -- accel/accel.sh@72 -- # IFS== 00:12:28.709 12:56:32 -- accel/accel.sh@72 -- # read -r opc module 00:12:28.709 12:56:32 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:28.709 12:56:32 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:28.709 12:56:32 -- accel/accel.sh@72 -- # IFS== 00:12:28.709 12:56:32 -- accel/accel.sh@72 -- # read -r opc module 00:12:28.709 12:56:32 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:28.709 12:56:32 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:28.709 12:56:32 -- accel/accel.sh@72 -- # IFS== 00:12:28.709 12:56:32 -- accel/accel.sh@72 -- # read -r opc module 00:12:28.709 12:56:32 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:28.709 12:56:32 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:28.709 12:56:32 -- accel/accel.sh@72 -- # IFS== 00:12:28.709 
12:56:32 -- accel/accel.sh@72 -- # read -r opc module 00:12:28.709 12:56:32 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:28.709 12:56:32 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:28.709 12:56:32 -- accel/accel.sh@72 -- # IFS== 00:12:28.709 12:56:32 -- accel/accel.sh@72 -- # read -r opc module 00:12:28.709 12:56:32 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:28.709 12:56:32 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:28.709 12:56:32 -- accel/accel.sh@72 -- # IFS== 00:12:28.709 12:56:32 -- accel/accel.sh@72 -- # read -r opc module 00:12:28.709 12:56:32 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:28.709 12:56:32 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:28.709 12:56:32 -- accel/accel.sh@72 -- # IFS== 00:12:28.709 12:56:32 -- accel/accel.sh@72 -- # read -r opc module 00:12:28.709 12:56:32 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:28.709 12:56:32 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:28.709 12:56:32 -- accel/accel.sh@72 -- # IFS== 00:12:28.709 12:56:32 -- accel/accel.sh@72 -- # read -r opc module 00:12:28.709 12:56:32 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:28.709 12:56:32 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:28.709 12:56:32 -- accel/accel.sh@72 -- # IFS== 00:12:28.709 12:56:32 -- accel/accel.sh@72 -- # read -r opc module 00:12:28.709 12:56:32 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:28.709 12:56:32 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:28.709 12:56:32 -- accel/accel.sh@72 -- # IFS== 00:12:28.709 12:56:32 -- accel/accel.sh@72 -- # read -r opc module 00:12:28.709 12:56:32 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:28.709 12:56:32 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:28.709 12:56:32 -- accel/accel.sh@72 -- # IFS== 00:12:28.709 12:56:32 -- accel/accel.sh@72 -- # read -r opc module 00:12:28.709 12:56:32 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:28.709 12:56:32 -- accel/accel.sh@75 -- # killprocess 113121 00:12:28.709 12:56:32 -- common/autotest_common.sh@924 -- # '[' -z 113121 ']' 00:12:28.709 12:56:32 -- common/autotest_common.sh@928 -- # kill -0 113121 00:12:28.709 12:56:32 -- common/autotest_common.sh@929 -- # uname 00:12:28.709 12:56:32 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:12:28.709 12:56:32 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 113121 00:12:28.709 12:56:32 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:12:28.709 12:56:32 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:12:28.709 12:56:32 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 113121' 00:12:28.709 killing process with pid 113121 00:12:28.709 12:56:32 -- common/autotest_common.sh@943 -- # kill 113121 00:12:28.710 12:56:32 -- common/autotest_common.sh@948 -- # wait 113121 00:12:30.619 12:56:34 -- accel/accel.sh@76 -- # trap - ERR 00:12:30.619 12:56:34 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:12:30.619 12:56:34 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:12:30.619 12:56:34 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:30.619 12:56:34 -- common/autotest_common.sh@10 -- # set +x 00:12:30.878 12:56:34 -- common/autotest_common.sh@1099 -- # accel_perf -h 00:12:30.878 12:56:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c 
/dev/fd/62 -h 00:12:30.878 12:56:34 -- accel/accel.sh@12 -- # build_accel_config 00:12:30.878 12:56:34 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:30.878 12:56:34 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:30.878 12:56:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:30.878 12:56:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:30.878 12:56:34 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:30.878 12:56:34 -- accel/accel.sh@40 -- # local IFS=, 00:12:30.878 12:56:34 -- accel/accel.sh@41 -- # jq -r . 00:12:30.878 12:56:34 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:12:30.878 12:56:34 -- common/autotest_common.sh@10 -- # set +x 00:12:30.878 12:56:34 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:12:30.878 12:56:34 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:12:30.878 12:56:34 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:30.878 12:56:34 -- common/autotest_common.sh@10 -- # set +x 00:12:30.878 ************************************ 00:12:30.878 START TEST accel_missing_filename 00:12:30.878 ************************************ 00:12:30.878 12:56:34 -- common/autotest_common.sh@1099 -- # NOT accel_perf -t 1 -w compress 00:12:30.878 12:56:34 -- common/autotest_common.sh@638 -- # local es=0 00:12:30.878 12:56:34 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:12:30.878 12:56:34 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:12:30.878 12:56:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:30.878 12:56:34 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:12:30.878 12:56:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:30.878 12:56:34 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:12:30.878 12:56:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:12:30.878 12:56:34 -- accel/accel.sh@12 -- # build_accel_config 00:12:30.878 12:56:34 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:30.878 12:56:34 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:30.878 12:56:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:30.878 12:56:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:30.878 12:56:34 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:30.878 12:56:34 -- accel/accel.sh@40 -- # local IFS=, 00:12:30.878 12:56:34 -- accel/accel.sh@41 -- # jq -r . 00:12:30.878 [2024-04-17 12:56:34.968089] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:12:30.878 [2024-04-17 12:56:34.968278] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113221 ] 00:12:31.137 [2024-04-17 12:56:35.139446] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.395 [2024-04-17 12:56:35.369698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.653 [2024-04-17 12:56:35.585144] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:32.219 [2024-04-17 12:56:36.115559] accel_perf.c:1466:main: *ERROR*: ERROR starting application 00:12:32.477 A filename is required. 
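accel_missing_filename drives accel_perf through the NOT wrapper with -t 1 -w compress and no input file, so the "A filename is required." abort above is the pass condition; the compress_verify test that follows supplies the input with -l (and its -y verify flag is what triggers the next abort). A sketch of the two invocations, with paths assumed from this log:

    # failing form exercised here: compress workload with no input file
    ./build/examples/accel_perf -t 1 -w compress
    # form with an input file, as the next test invokes it (its -y flag is
    # rejected separately, since compress does not support verify)
    ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib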
00:12:32.477 12:56:36 -- common/autotest_common.sh@641 -- # es=234 00:12:32.477 12:56:36 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:32.477 12:56:36 -- common/autotest_common.sh@650 -- # es=106 00:12:32.477 ************************************ 00:12:32.477 END TEST accel_missing_filename 00:12:32.477 ************************************ 00:12:32.477 12:56:36 -- common/autotest_common.sh@651 -- # case "$es" in 00:12:32.477 12:56:36 -- common/autotest_common.sh@658 -- # es=1 00:12:32.477 12:56:36 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:32.477 00:12:32.477 real 0m1.573s 00:12:32.477 user 0m1.336s 00:12:32.477 sys 0m0.195s 00:12:32.477 12:56:36 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:12:32.477 12:56:36 -- common/autotest_common.sh@10 -- # set +x 00:12:32.477 12:56:36 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:32.477 12:56:36 -- common/autotest_common.sh@1075 -- # '[' 10 -le 1 ']' 00:12:32.477 12:56:36 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:32.477 12:56:36 -- common/autotest_common.sh@10 -- # set +x 00:12:32.477 ************************************ 00:12:32.477 START TEST accel_compress_verify 00:12:32.477 ************************************ 00:12:32.477 12:56:36 -- common/autotest_common.sh@1099 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:32.477 12:56:36 -- common/autotest_common.sh@638 -- # local es=0 00:12:32.477 12:56:36 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:32.477 12:56:36 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:12:32.477 12:56:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:32.477 12:56:36 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:12:32.477 12:56:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:32.477 12:56:36 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:32.477 12:56:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:32.477 12:56:36 -- accel/accel.sh@12 -- # build_accel_config 00:12:32.477 12:56:36 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:32.477 12:56:36 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:32.477 12:56:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:32.477 12:56:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:32.477 12:56:36 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:32.477 12:56:36 -- accel/accel.sh@40 -- # local IFS=, 00:12:32.477 12:56:36 -- accel/accel.sh@41 -- # jq -r . 00:12:32.735 [2024-04-17 12:56:36.623060] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
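The es=234 / es=106 / es=1 bookkeeping in the trace above comes from the NOT helper in autotest_common.sh: it first checks via type -t that its argument is runnable (valid_exec_arg), then inverts the command's exit status so that an expected failure makes the surrounding run_test pass. A simplified sketch of the pattern, not the verbatim source:

  NOT() {
      local es=0
      "$@" || es=$?
      if ((es > 128)); then
          # the command died on a signal (trace above: es=234 -> 106 after masking)
          es=$((es & ~128))
          case $es in
              # crash-style signals do not count as a clean failure
              # (sketch only; the real signal list lives in autotest_common.sh)
              3 | 4 | 6 | 8 | 9 | 11) es=0 ;;
              *) es=1 ;;
          esac
      fi
      ((!es == 0))    # succeeds only when the wrapped command failed cleanly
  }

  # as used above: the test passes because accel_perf exits non-zero
  NOT accel_perf -t 1 -w compress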
00:12:32.735 [2024-04-17 12:56:36.623315] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113267 ] 00:12:32.735 [2024-04-17 12:56:36.800308] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.993 [2024-04-17 12:56:37.056580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.252 [2024-04-17 12:56:37.276586] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:33.818 [2024-04-17 12:56:37.776867] accel_perf.c:1466:main: *ERROR*: ERROR starting application 00:12:34.076 00:12:34.076 Compression does not support the verify option, aborting. 00:12:34.076 ************************************ 00:12:34.076 END TEST accel_compress_verify 00:12:34.076 ************************************ 00:12:34.076 12:56:38 -- common/autotest_common.sh@641 -- # es=161 00:12:34.076 12:56:38 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:34.076 12:56:38 -- common/autotest_common.sh@650 -- # es=33 00:12:34.076 12:56:38 -- common/autotest_common.sh@651 -- # case "$es" in 00:12:34.076 12:56:38 -- common/autotest_common.sh@658 -- # es=1 00:12:34.076 12:56:38 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:34.076 00:12:34.076 real 0m1.589s 00:12:34.076 user 0m1.347s 00:12:34.076 sys 0m0.215s 00:12:34.076 12:56:38 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:12:34.076 12:56:38 -- common/autotest_common.sh@10 -- # set +x 00:12:34.076 12:56:38 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:12:34.076 12:56:38 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:12:34.076 12:56:38 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:34.076 12:56:38 -- common/autotest_common.sh@10 -- # set +x 00:12:34.335 ************************************ 00:12:34.335 START TEST accel_wrong_workload 00:12:34.335 ************************************ 00:12:34.335 12:56:38 -- common/autotest_common.sh@1099 -- # NOT accel_perf -t 1 -w foobar 00:12:34.335 12:56:38 -- common/autotest_common.sh@638 -- # local es=0 00:12:34.335 12:56:38 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:12:34.335 12:56:38 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:12:34.335 12:56:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:34.335 12:56:38 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:12:34.335 12:56:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:34.335 12:56:38 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:12:34.335 12:56:38 -- accel/accel.sh@12 -- # build_accel_config 00:12:34.335 12:56:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:12:34.335 12:56:38 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:34.335 12:56:38 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:34.335 12:56:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:34.335 12:56:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:34.335 12:56:38 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:34.335 12:56:38 -- accel/accel.sh@40 -- # local IFS=, 00:12:34.335 12:56:38 -- accel/accel.sh@41 -- # jq -r . 
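accel_compress_verify, just completed above, established the complementary limit: -y asks accel_perf to verify results, and the compress workload rejects that combination outright ("Compression does not support the verify option, aborting."), which is exactly the failure NOT expects. Manual reproduction, same paths assumed:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
  # -> exits non-zero after "Compression does not support the verify option, aborting."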
00:12:34.335 Unsupported workload type: foobar 00:12:34.335 [2024-04-17 12:56:38.276314] app.c:1339:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:12:34.335 accel_perf options: 00:12:34.335 [-h help message] 00:12:34.335 [-q queue depth per core] 00:12:34.335 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:12:34.335 [-T number of threads per core 00:12:34.335 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:12:34.335 [-t time in seconds] 00:12:34.335 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:12:34.335 [ dif_verify, , dif_generate, dif_generate_copy, dif_strip 00:12:34.335 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:12:34.335 [-l for compress/decompress workloads, name of uncompressed input file 00:12:34.335 [-S for crc32c workload, use this seed value (default 0) 00:12:34.335 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:12:34.335 [-f for fill workload, use this BYTE value (default 255) 00:12:34.335 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:12:34.335 [-y verify result if this switch is on] 00:12:34.335 [-a tasks to allocate per core (default: same value as -q)] 00:12:34.336 Can be used to spread operations across a wider range of memory. 00:12:34.336 12:56:38 -- common/autotest_common.sh@641 -- # es=1 00:12:34.336 12:56:38 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:34.336 12:56:38 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:34.336 12:56:38 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:34.336 00:12:34.336 real 0m0.062s 00:12:34.336 user 0m0.072s 00:12:34.336 sys 0m0.037s 00:12:34.336 12:56:38 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:12:34.336 12:56:38 -- common/autotest_common.sh@10 -- # set +x 00:12:34.336 ************************************ 00:12:34.336 END TEST accel_wrong_workload 00:12:34.336 ************************************ 00:12:34.336 12:56:38 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:12:34.336 12:56:38 -- common/autotest_common.sh@1075 -- # '[' 10 -le 1 ']' 00:12:34.336 12:56:38 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:34.336 12:56:38 -- common/autotest_common.sh@10 -- # set +x 00:12:34.336 ************************************ 00:12:34.336 START TEST accel_negative_buffers 00:12:34.336 ************************************ 00:12:34.336 12:56:38 -- common/autotest_common.sh@1099 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:12:34.336 12:56:38 -- common/autotest_common.sh@638 -- # local es=0 00:12:34.336 12:56:38 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:12:34.336 12:56:38 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:12:34.336 12:56:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:34.336 12:56:38 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:12:34.336 12:56:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:34.336 12:56:38 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:12:34.336 12:56:38 -- accel/accel.sh@12 -- # build_accel_config 00:12:34.336 12:56:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
xor -y -x -1 00:12:34.336 12:56:38 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:34.336 12:56:38 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:34.336 12:56:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:34.336 12:56:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:34.336 12:56:38 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:34.336 12:56:38 -- accel/accel.sh@40 -- # local IFS=, 00:12:34.336 12:56:38 -- accel/accel.sh@41 -- # jq -r . 00:12:34.336 -x option must be non-negative. 00:12:34.336 [2024-04-17 12:56:38.410514] app.c:1339:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:12:34.336 accel_perf options: 00:12:34.336 [-h help message] 00:12:34.336 [-q queue depth per core] 00:12:34.336 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:12:34.336 [-T number of threads per core 00:12:34.336 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:12:34.336 [-t time in seconds] 00:12:34.336 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:12:34.336 [ dif_verify, , dif_generate, dif_generate_copy, dif_strip 00:12:34.336 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:12:34.336 [-l for compress/decompress workloads, name of uncompressed input file 00:12:34.336 [-S for crc32c workload, use this seed value (default 0) 00:12:34.336 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:12:34.336 [-f for fill workload, use this BYTE value (default 255) 00:12:34.336 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:12:34.336 [-y verify result if this switch is on] 00:12:34.336 [-a tasks to allocate per core (default: same value as -q)] 00:12:34.336 Can be used to spread operations across a wider range of memory. 
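accel_wrong_workload and accel_negative_buffers both fail inside spdk_app_parse_args, before the reactor even starts: -w foobar is not a known opcode and -x must be non-negative, so each bad flag triggers the usage dump above. A sketch of the failing calls next to a valid one, same build tree assumed:

  # both rejected during argument parsing (exit non-zero, print the usage text)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w foobar
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x -1

  # a valid xor run needs at least two source buffers (-x minimum is 2, per the usage text)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2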
00:12:34.336 12:56:38 -- common/autotest_common.sh@641 -- # es=1 00:12:34.336 12:56:38 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:34.336 12:56:38 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:34.336 12:56:38 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:34.336 00:12:34.336 real 0m0.064s 00:12:34.336 user 0m0.025s 00:12:34.336 sys 0m0.038s 00:12:34.336 12:56:38 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:12:34.336 12:56:38 -- common/autotest_common.sh@10 -- # set +x 00:12:34.336 ************************************ 00:12:34.336 END TEST accel_negative_buffers 00:12:34.336 ************************************ 00:12:34.336 12:56:38 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:12:34.336 12:56:38 -- common/autotest_common.sh@1075 -- # '[' 9 -le 1 ']' 00:12:34.336 12:56:38 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:34.336 12:56:38 -- common/autotest_common.sh@10 -- # set +x 00:12:34.594 ************************************ 00:12:34.594 START TEST accel_crc32c 00:12:34.594 ************************************ 00:12:34.594 12:56:38 -- common/autotest_common.sh@1099 -- # accel_test -t 1 -w crc32c -S 32 -y 00:12:34.594 12:56:38 -- accel/accel.sh@16 -- # local accel_opc 00:12:34.594 12:56:38 -- accel/accel.sh@17 -- # local accel_module 00:12:34.594 12:56:38 -- accel/accel.sh@19 -- # IFS=: 00:12:34.594 12:56:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:12:34.594 12:56:38 -- accel/accel.sh@19 -- # read -r var val 00:12:34.594 12:56:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:12:34.594 12:56:38 -- accel/accel.sh@12 -- # build_accel_config 00:12:34.594 12:56:38 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:34.594 12:56:38 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:34.594 12:56:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:34.594 12:56:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:34.594 12:56:38 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:34.594 12:56:38 -- accel/accel.sh@40 -- # local IFS=, 00:12:34.594 12:56:38 -- accel/accel.sh@41 -- # jq -r . 00:12:34.594 [2024-04-17 12:56:38.556529] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
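The "-c /dev/fd/62" that opens every accel_perf command line in these traces comes from build_accel_config: the accel JSON fragments collected in accel_json_cfg[] are joined with IFS=',', pretty-printed by jq -r ., and handed to accel_perf through bash process substitution as an anonymous config file. A simplified sketch of that plumbing, not the verbatim accel.sh source:

  build_accel_config() {
      accel_json_cfg=()
      # each SPDK_TEST_ACCEL_* guard would append a module-scan RPC here; in
      # this run all the "[[ 0 -gt 0 ]]" checks fell through, so it stays empty
      local IFS=","
      jq -r . <<< "{\"subsystems\": [{\"subsystem\": \"accel\", \"config\": [${accel_json_cfg[*]}]}]}"
  }

  # <(...) is what shows up as /dev/fd/62 in the traces above
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c <(build_accel_config) -t 1 -w crc32c -S 32 -y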
00:12:34.594 [2024-04-17 12:56:38.557058] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113385 ] 00:12:34.594 [2024-04-17 12:56:38.725820] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.853 [2024-04-17 12:56:38.940926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.112 12:56:39 -- accel/accel.sh@20 -- # val= 00:12:35.112 12:56:39 -- accel/accel.sh@21 -- # case "$var" in 00:12:35.112 12:56:39 -- accel/accel.sh@19 -- # IFS=: 00:12:35.112 12:56:39 -- accel/accel.sh@19 -- # read -r var val 00:12:35.112 12:56:39 -- accel/accel.sh@20 -- # val= 00:12:35.112 12:56:39 -- accel/accel.sh@21 -- # case "$var" in 00:12:35.112 12:56:39 -- accel/accel.sh@19 -- # IFS=: 00:12:35.112 12:56:39 -- accel/accel.sh@19 -- # read -r var val 00:12:35.112 12:56:39 -- accel/accel.sh@20 -- # val=0x1 00:12:35.112 12:56:39 -- accel/accel.sh@21 -- # case "$var" in 00:12:35.112 12:56:39 -- accel/accel.sh@19 -- # IFS=: 00:12:35.112 12:56:39 -- accel/accel.sh@19 -- # read -r var val 00:12:35.112 12:56:39 -- accel/accel.sh@20 -- # val= 00:12:35.112 12:56:39 -- accel/accel.sh@21 -- # case "$var" in 00:12:35.112 12:56:39 -- accel/accel.sh@19 -- # IFS=: 00:12:35.112 12:56:39 -- accel/accel.sh@19 -- # read -r var val 00:12:35.112 12:56:39 -- accel/accel.sh@20 -- # val= 00:12:35.112 12:56:39 -- accel/accel.sh@21 -- # case "$var" in 00:12:35.112 12:56:39 -- accel/accel.sh@19 -- # IFS=: 00:12:35.112 12:56:39 -- accel/accel.sh@19 -- # read -r var val 00:12:35.112 12:56:39 -- accel/accel.sh@20 -- # val=crc32c 00:12:35.112 12:56:39 -- accel/accel.sh@21 -- # case "$var" in 00:12:35.112 12:56:39 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:12:35.112 12:56:39 -- accel/accel.sh@19 -- # IFS=: 00:12:35.112 12:56:39 -- accel/accel.sh@19 -- # read -r var val 00:12:35.112 12:56:39 -- accel/accel.sh@20 -- # val=32 00:12:35.112 12:56:39 -- accel/accel.sh@21 -- # case "$var" in 00:12:35.112 12:56:39 -- accel/accel.sh@19 -- # IFS=: 00:12:35.112 12:56:39 -- accel/accel.sh@19 -- # read -r var val 00:12:35.112 12:56:39 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:35.112 12:56:39 -- accel/accel.sh@21 -- # case "$var" in 00:12:35.112 12:56:39 -- accel/accel.sh@19 -- # IFS=: 00:12:35.112 12:56:39 -- accel/accel.sh@19 -- # read -r var val 00:12:35.112 12:56:39 -- accel/accel.sh@20 -- # val= 00:12:35.112 12:56:39 -- accel/accel.sh@21 -- # case "$var" in 00:12:35.112 12:56:39 -- accel/accel.sh@19 -- # IFS=: 00:12:35.112 12:56:39 -- accel/accel.sh@19 -- # read -r var val 00:12:35.112 12:56:39 -- accel/accel.sh@20 -- # val=software 00:12:35.112 12:56:39 -- accel/accel.sh@21 -- # case "$var" in 00:12:35.112 12:56:39 -- accel/accel.sh@22 -- # accel_module=software 00:12:35.112 12:56:39 -- accel/accel.sh@19 -- # IFS=: 00:12:35.112 12:56:39 -- accel/accel.sh@19 -- # read -r var val 00:12:35.112 12:56:39 -- accel/accel.sh@20 -- # val=32 00:12:35.112 12:56:39 -- accel/accel.sh@21 -- # case "$var" in 00:12:35.112 12:56:39 -- accel/accel.sh@19 -- # IFS=: 00:12:35.112 12:56:39 -- accel/accel.sh@19 -- # read -r var val 00:12:35.112 12:56:39 -- accel/accel.sh@20 -- # val=32 00:12:35.112 12:56:39 -- accel/accel.sh@21 -- # case "$var" in 00:12:35.112 12:56:39 -- accel/accel.sh@19 -- # IFS=: 00:12:35.112 12:56:39 -- accel/accel.sh@19 -- # read -r var val 00:12:35.112 12:56:39 -- accel/accel.sh@20 -- # val=1 00:12:35.112 12:56:39 
-- accel/accel.sh@21 -- # case "$var" in 00:12:35.112 12:56:39 -- accel/accel.sh@19 -- # IFS=: 00:12:35.112 12:56:39 -- accel/accel.sh@19 -- # read -r var val 00:12:35.112 12:56:39 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:35.112 12:56:39 -- accel/accel.sh@21 -- # case "$var" in 00:12:35.112 12:56:39 -- accel/accel.sh@19 -- # IFS=: 00:12:35.112 12:56:39 -- accel/accel.sh@19 -- # read -r var val 00:12:35.112 12:56:39 -- accel/accel.sh@20 -- # val=Yes 00:12:35.112 12:56:39 -- accel/accel.sh@21 -- # case "$var" in 00:12:35.112 12:56:39 -- accel/accel.sh@19 -- # IFS=: 00:12:35.112 12:56:39 -- accel/accel.sh@19 -- # read -r var val 00:12:35.112 12:56:39 -- accel/accel.sh@20 -- # val= 00:12:35.112 12:56:39 -- accel/accel.sh@21 -- # case "$var" in 00:12:35.112 12:56:39 -- accel/accel.sh@19 -- # IFS=: 00:12:35.112 12:56:39 -- accel/accel.sh@19 -- # read -r var val 00:12:35.112 12:56:39 -- accel/accel.sh@20 -- # val= 00:12:35.112 12:56:39 -- accel/accel.sh@21 -- # case "$var" in 00:12:35.112 12:56:39 -- accel/accel.sh@19 -- # IFS=: 00:12:35.112 12:56:39 -- accel/accel.sh@19 -- # read -r var val 00:12:37.021 12:56:41 -- accel/accel.sh@20 -- # val= 00:12:37.021 12:56:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:37.021 12:56:41 -- accel/accel.sh@19 -- # IFS=: 00:12:37.021 12:56:41 -- accel/accel.sh@19 -- # read -r var val 00:12:37.021 12:56:41 -- accel/accel.sh@20 -- # val= 00:12:37.021 12:56:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:37.021 12:56:41 -- accel/accel.sh@19 -- # IFS=: 00:12:37.021 12:56:41 -- accel/accel.sh@19 -- # read -r var val 00:12:37.021 12:56:41 -- accel/accel.sh@20 -- # val= 00:12:37.021 12:56:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:37.021 12:56:41 -- accel/accel.sh@19 -- # IFS=: 00:12:37.021 12:56:41 -- accel/accel.sh@19 -- # read -r var val 00:12:37.021 12:56:41 -- accel/accel.sh@20 -- # val= 00:12:37.021 12:56:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:37.021 12:56:41 -- accel/accel.sh@19 -- # IFS=: 00:12:37.021 12:56:41 -- accel/accel.sh@19 -- # read -r var val 00:12:37.021 12:56:41 -- accel/accel.sh@20 -- # val= 00:12:37.021 12:56:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:37.021 12:56:41 -- accel/accel.sh@19 -- # IFS=: 00:12:37.021 12:56:41 -- accel/accel.sh@19 -- # read -r var val 00:12:37.021 12:56:41 -- accel/accel.sh@20 -- # val= 00:12:37.021 12:56:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:37.021 12:56:41 -- accel/accel.sh@19 -- # IFS=: 00:12:37.021 12:56:41 -- accel/accel.sh@19 -- # read -r var val 00:12:37.021 ************************************ 00:12:37.021 END TEST accel_crc32c 00:12:37.021 ************************************ 00:12:37.021 12:56:41 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:37.021 12:56:41 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:12:37.021 12:56:41 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:37.021 00:12:37.021 real 0m2.540s 00:12:37.021 user 0m2.267s 00:12:37.021 sys 0m0.201s 00:12:37.021 12:56:41 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:12:37.021 12:56:41 -- common/autotest_common.sh@10 -- # set +x 00:12:37.021 12:56:41 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:12:37.021 12:56:41 -- common/autotest_common.sh@1075 -- # '[' 9 -le 1 ']' 00:12:37.021 12:56:41 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:37.021 12:56:41 -- common/autotest_common.sh@10 -- # set +x 00:12:37.021 ************************************ 00:12:37.021 START TEST accel_crc32c_C2 00:12:37.021 
************************************ 00:12:37.021 12:56:41 -- common/autotest_common.sh@1099 -- # accel_test -t 1 -w crc32c -y -C 2 00:12:37.021 12:56:41 -- accel/accel.sh@16 -- # local accel_opc 00:12:37.021 12:56:41 -- accel/accel.sh@17 -- # local accel_module 00:12:37.021 12:56:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:12:37.021 12:56:41 -- accel/accel.sh@19 -- # IFS=: 00:12:37.021 12:56:41 -- accel/accel.sh@19 -- # read -r var val 00:12:37.021 12:56:41 -- accel/accel.sh@12 -- # build_accel_config 00:12:37.021 12:56:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:12:37.021 12:56:41 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:37.021 12:56:41 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:37.021 12:56:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:37.021 12:56:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:37.021 12:56:41 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:37.021 12:56:41 -- accel/accel.sh@40 -- # local IFS=, 00:12:37.021 12:56:41 -- accel/accel.sh@41 -- # jq -r . 00:12:37.281 [2024-04-17 12:56:41.166772] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:12:37.281 [2024-04-17 12:56:41.168757] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113451 ] 00:12:37.281 [2024-04-17 12:56:41.339668] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.540 [2024-04-17 12:56:41.593487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.799 12:56:41 -- accel/accel.sh@20 -- # val= 00:12:37.799 12:56:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:37.799 12:56:41 -- accel/accel.sh@19 -- # IFS=: 00:12:37.799 12:56:41 -- accel/accel.sh@19 -- # read -r var val 00:12:37.799 12:56:41 -- accel/accel.sh@20 -- # val= 00:12:37.799 12:56:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:37.799 12:56:41 -- accel/accel.sh@19 -- # IFS=: 00:12:37.799 12:56:41 -- accel/accel.sh@19 -- # read -r var val 00:12:37.799 12:56:41 -- accel/accel.sh@20 -- # val=0x1 00:12:37.799 12:56:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:37.799 12:56:41 -- accel/accel.sh@19 -- # IFS=: 00:12:37.799 12:56:41 -- accel/accel.sh@19 -- # read -r var val 00:12:37.799 12:56:41 -- accel/accel.sh@20 -- # val= 00:12:37.799 12:56:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:37.799 12:56:41 -- accel/accel.sh@19 -- # IFS=: 00:12:37.799 12:56:41 -- accel/accel.sh@19 -- # read -r var val 00:12:37.799 12:56:41 -- accel/accel.sh@20 -- # val= 00:12:37.799 12:56:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:37.799 12:56:41 -- accel/accel.sh@19 -- # IFS=: 00:12:37.799 12:56:41 -- accel/accel.sh@19 -- # read -r var val 00:12:37.799 12:56:41 -- accel/accel.sh@20 -- # val=crc32c 00:12:37.799 12:56:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:37.799 12:56:41 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:12:37.799 12:56:41 -- accel/accel.sh@19 -- # IFS=: 00:12:37.799 12:56:41 -- accel/accel.sh@19 -- # read -r var val 00:12:37.799 12:56:41 -- accel/accel.sh@20 -- # val=0 00:12:37.799 12:56:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:37.799 12:56:41 -- accel/accel.sh@19 -- # IFS=: 00:12:37.799 12:56:41 -- accel/accel.sh@19 -- # read -r var val 00:12:37.799 12:56:41 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:37.799 12:56:41 -- accel/accel.sh@21 -- # case 
"$var" in 00:12:37.799 12:56:41 -- accel/accel.sh@19 -- # IFS=: 00:12:37.799 12:56:41 -- accel/accel.sh@19 -- # read -r var val 00:12:37.799 12:56:41 -- accel/accel.sh@20 -- # val= 00:12:37.799 12:56:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:37.799 12:56:41 -- accel/accel.sh@19 -- # IFS=: 00:12:37.799 12:56:41 -- accel/accel.sh@19 -- # read -r var val 00:12:37.799 12:56:41 -- accel/accel.sh@20 -- # val=software 00:12:37.799 12:56:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:37.799 12:56:41 -- accel/accel.sh@22 -- # accel_module=software 00:12:37.799 12:56:41 -- accel/accel.sh@19 -- # IFS=: 00:12:37.799 12:56:41 -- accel/accel.sh@19 -- # read -r var val 00:12:37.799 12:56:41 -- accel/accel.sh@20 -- # val=32 00:12:37.799 12:56:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:37.799 12:56:41 -- accel/accel.sh@19 -- # IFS=: 00:12:37.799 12:56:41 -- accel/accel.sh@19 -- # read -r var val 00:12:37.799 12:56:41 -- accel/accel.sh@20 -- # val=32 00:12:37.800 12:56:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:37.800 12:56:41 -- accel/accel.sh@19 -- # IFS=: 00:12:37.800 12:56:41 -- accel/accel.sh@19 -- # read -r var val 00:12:37.800 12:56:41 -- accel/accel.sh@20 -- # val=1 00:12:37.800 12:56:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:37.800 12:56:41 -- accel/accel.sh@19 -- # IFS=: 00:12:37.800 12:56:41 -- accel/accel.sh@19 -- # read -r var val 00:12:37.800 12:56:41 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:37.800 12:56:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:37.800 12:56:41 -- accel/accel.sh@19 -- # IFS=: 00:12:37.800 12:56:41 -- accel/accel.sh@19 -- # read -r var val 00:12:37.800 12:56:41 -- accel/accel.sh@20 -- # val=Yes 00:12:37.800 12:56:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:37.800 12:56:41 -- accel/accel.sh@19 -- # IFS=: 00:12:37.800 12:56:41 -- accel/accel.sh@19 -- # read -r var val 00:12:37.800 12:56:41 -- accel/accel.sh@20 -- # val= 00:12:37.800 12:56:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:37.800 12:56:41 -- accel/accel.sh@19 -- # IFS=: 00:12:37.800 12:56:41 -- accel/accel.sh@19 -- # read -r var val 00:12:37.800 12:56:41 -- accel/accel.sh@20 -- # val= 00:12:37.800 12:56:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:37.800 12:56:41 -- accel/accel.sh@19 -- # IFS=: 00:12:37.800 12:56:41 -- accel/accel.sh@19 -- # read -r var val 00:12:39.705 12:56:43 -- accel/accel.sh@20 -- # val= 00:12:39.705 12:56:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:39.705 12:56:43 -- accel/accel.sh@19 -- # IFS=: 00:12:39.705 12:56:43 -- accel/accel.sh@19 -- # read -r var val 00:12:39.705 12:56:43 -- accel/accel.sh@20 -- # val= 00:12:39.705 12:56:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:39.705 12:56:43 -- accel/accel.sh@19 -- # IFS=: 00:12:39.705 12:56:43 -- accel/accel.sh@19 -- # read -r var val 00:12:39.705 12:56:43 -- accel/accel.sh@20 -- # val= 00:12:39.705 12:56:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:39.705 12:56:43 -- accel/accel.sh@19 -- # IFS=: 00:12:39.705 12:56:43 -- accel/accel.sh@19 -- # read -r var val 00:12:39.705 12:56:43 -- accel/accel.sh@20 -- # val= 00:12:39.706 12:56:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:39.706 12:56:43 -- accel/accel.sh@19 -- # IFS=: 00:12:39.706 12:56:43 -- accel/accel.sh@19 -- # read -r var val 00:12:39.706 12:56:43 -- accel/accel.sh@20 -- # val= 00:12:39.706 12:56:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:39.706 12:56:43 -- accel/accel.sh@19 -- # IFS=: 00:12:39.706 12:56:43 -- accel/accel.sh@19 -- # read -r var val 00:12:39.706 12:56:43 -- accel/accel.sh@20 -- # val= 
00:12:39.706 12:56:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:39.706 12:56:43 -- accel/accel.sh@19 -- # IFS=: 00:12:39.706 12:56:43 -- accel/accel.sh@19 -- # read -r var val 00:12:39.706 ************************************ 00:12:39.706 END TEST accel_crc32c_C2 00:12:39.706 ************************************ 00:12:39.706 12:56:43 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:39.706 12:56:43 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:12:39.706 12:56:43 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:39.706 00:12:39.706 real 0m2.563s 00:12:39.706 user 0m2.284s 00:12:39.706 sys 0m0.216s 00:12:39.706 12:56:43 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:12:39.706 12:56:43 -- common/autotest_common.sh@10 -- # set +x 00:12:39.706 12:56:43 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:12:39.706 12:56:43 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:12:39.706 12:56:43 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:39.706 12:56:43 -- common/autotest_common.sh@10 -- # set +x 00:12:39.706 ************************************ 00:12:39.706 START TEST accel_copy 00:12:39.706 ************************************ 00:12:39.706 12:56:43 -- common/autotest_common.sh@1099 -- # accel_test -t 1 -w copy -y 00:12:39.706 12:56:43 -- accel/accel.sh@16 -- # local accel_opc 00:12:39.706 12:56:43 -- accel/accel.sh@17 -- # local accel_module 00:12:39.706 12:56:43 -- accel/accel.sh@19 -- # IFS=: 00:12:39.706 12:56:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:12:39.706 12:56:43 -- accel/accel.sh@19 -- # read -r var val 00:12:39.706 12:56:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:12:39.706 12:56:43 -- accel/accel.sh@12 -- # build_accel_config 00:12:39.706 12:56:43 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:39.706 12:56:43 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:39.706 12:56:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:39.706 12:56:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:39.706 12:56:43 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:39.706 12:56:43 -- accel/accel.sh@40 -- # local IFS=, 00:12:39.706 12:56:43 -- accel/accel.sh@41 -- # jq -r . 00:12:39.706 [2024-04-17 12:56:43.794205] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
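The two crc32c runs that just finished differ only in their knobs: the first sets the CRC seed with -S 32, the second keeps the default seed but drives each 4096-byte task through a two-element io vector with -C 2 (the "-C ... io vector size" option from the usage text earlier). Equivalent manual invocations, same build tree assumed:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -y -C 2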
00:12:39.706 [2024-04-17 12:56:43.794537] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113506 ] 00:12:39.964 [2024-04-17 12:56:43.951804] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.223 [2024-04-17 12:56:44.181399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.482 12:56:44 -- accel/accel.sh@20 -- # val= 00:12:40.482 12:56:44 -- accel/accel.sh@21 -- # case "$var" in 00:12:40.482 12:56:44 -- accel/accel.sh@19 -- # IFS=: 00:12:40.482 12:56:44 -- accel/accel.sh@19 -- # read -r var val 00:12:40.482 12:56:44 -- accel/accel.sh@20 -- # val= 00:12:40.482 12:56:44 -- accel/accel.sh@21 -- # case "$var" in 00:12:40.482 12:56:44 -- accel/accel.sh@19 -- # IFS=: 00:12:40.482 12:56:44 -- accel/accel.sh@19 -- # read -r var val 00:12:40.482 12:56:44 -- accel/accel.sh@20 -- # val=0x1 00:12:40.482 12:56:44 -- accel/accel.sh@21 -- # case "$var" in 00:12:40.482 12:56:44 -- accel/accel.sh@19 -- # IFS=: 00:12:40.482 12:56:44 -- accel/accel.sh@19 -- # read -r var val 00:12:40.482 12:56:44 -- accel/accel.sh@20 -- # val= 00:12:40.482 12:56:44 -- accel/accel.sh@21 -- # case "$var" in 00:12:40.482 12:56:44 -- accel/accel.sh@19 -- # IFS=: 00:12:40.482 12:56:44 -- accel/accel.sh@19 -- # read -r var val 00:12:40.482 12:56:44 -- accel/accel.sh@20 -- # val= 00:12:40.482 12:56:44 -- accel/accel.sh@21 -- # case "$var" in 00:12:40.482 12:56:44 -- accel/accel.sh@19 -- # IFS=: 00:12:40.482 12:56:44 -- accel/accel.sh@19 -- # read -r var val 00:12:40.482 12:56:44 -- accel/accel.sh@20 -- # val=copy 00:12:40.482 12:56:44 -- accel/accel.sh@21 -- # case "$var" in 00:12:40.482 12:56:44 -- accel/accel.sh@23 -- # accel_opc=copy 00:12:40.482 12:56:44 -- accel/accel.sh@19 -- # IFS=: 00:12:40.482 12:56:44 -- accel/accel.sh@19 -- # read -r var val 00:12:40.482 12:56:44 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:40.482 12:56:44 -- accel/accel.sh@21 -- # case "$var" in 00:12:40.482 12:56:44 -- accel/accel.sh@19 -- # IFS=: 00:12:40.482 12:56:44 -- accel/accel.sh@19 -- # read -r var val 00:12:40.482 12:56:44 -- accel/accel.sh@20 -- # val= 00:12:40.482 12:56:44 -- accel/accel.sh@21 -- # case "$var" in 00:12:40.482 12:56:44 -- accel/accel.sh@19 -- # IFS=: 00:12:40.482 12:56:44 -- accel/accel.sh@19 -- # read -r var val 00:12:40.482 12:56:44 -- accel/accel.sh@20 -- # val=software 00:12:40.482 12:56:44 -- accel/accel.sh@21 -- # case "$var" in 00:12:40.482 12:56:44 -- accel/accel.sh@22 -- # accel_module=software 00:12:40.482 12:56:44 -- accel/accel.sh@19 -- # IFS=: 00:12:40.482 12:56:44 -- accel/accel.sh@19 -- # read -r var val 00:12:40.482 12:56:44 -- accel/accel.sh@20 -- # val=32 00:12:40.482 12:56:44 -- accel/accel.sh@21 -- # case "$var" in 00:12:40.482 12:56:44 -- accel/accel.sh@19 -- # IFS=: 00:12:40.482 12:56:44 -- accel/accel.sh@19 -- # read -r var val 00:12:40.482 12:56:44 -- accel/accel.sh@20 -- # val=32 00:12:40.482 12:56:44 -- accel/accel.sh@21 -- # case "$var" in 00:12:40.482 12:56:44 -- accel/accel.sh@19 -- # IFS=: 00:12:40.482 12:56:44 -- accel/accel.sh@19 -- # read -r var val 00:12:40.482 12:56:44 -- accel/accel.sh@20 -- # val=1 00:12:40.482 12:56:44 -- accel/accel.sh@21 -- # case "$var" in 00:12:40.482 12:56:44 -- accel/accel.sh@19 -- # IFS=: 00:12:40.482 12:56:44 -- accel/accel.sh@19 -- # read -r var val 00:12:40.482 12:56:44 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:40.482 
12:56:44 -- accel/accel.sh@21 -- # case "$var" in 00:12:40.482 12:56:44 -- accel/accel.sh@19 -- # IFS=: 00:12:40.482 12:56:44 -- accel/accel.sh@19 -- # read -r var val 00:12:40.482 12:56:44 -- accel/accel.sh@20 -- # val=Yes 00:12:40.482 12:56:44 -- accel/accel.sh@21 -- # case "$var" in 00:12:40.482 12:56:44 -- accel/accel.sh@19 -- # IFS=: 00:12:40.482 12:56:44 -- accel/accel.sh@19 -- # read -r var val 00:12:40.482 12:56:44 -- accel/accel.sh@20 -- # val= 00:12:40.482 12:56:44 -- accel/accel.sh@21 -- # case "$var" in 00:12:40.482 12:56:44 -- accel/accel.sh@19 -- # IFS=: 00:12:40.482 12:56:44 -- accel/accel.sh@19 -- # read -r var val 00:12:40.482 12:56:44 -- accel/accel.sh@20 -- # val= 00:12:40.482 12:56:44 -- accel/accel.sh@21 -- # case "$var" in 00:12:40.482 12:56:44 -- accel/accel.sh@19 -- # IFS=: 00:12:40.482 12:56:44 -- accel/accel.sh@19 -- # read -r var val 00:12:42.386 12:56:46 -- accel/accel.sh@20 -- # val= 00:12:42.386 12:56:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.386 12:56:46 -- accel/accel.sh@19 -- # IFS=: 00:12:42.386 12:56:46 -- accel/accel.sh@19 -- # read -r var val 00:12:42.386 12:56:46 -- accel/accel.sh@20 -- # val= 00:12:42.386 12:56:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.386 12:56:46 -- accel/accel.sh@19 -- # IFS=: 00:12:42.386 12:56:46 -- accel/accel.sh@19 -- # read -r var val 00:12:42.386 12:56:46 -- accel/accel.sh@20 -- # val= 00:12:42.386 12:56:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.386 12:56:46 -- accel/accel.sh@19 -- # IFS=: 00:12:42.386 12:56:46 -- accel/accel.sh@19 -- # read -r var val 00:12:42.386 12:56:46 -- accel/accel.sh@20 -- # val= 00:12:42.386 12:56:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.386 12:56:46 -- accel/accel.sh@19 -- # IFS=: 00:12:42.386 12:56:46 -- accel/accel.sh@19 -- # read -r var val 00:12:42.386 12:56:46 -- accel/accel.sh@20 -- # val= 00:12:42.386 12:56:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.386 12:56:46 -- accel/accel.sh@19 -- # IFS=: 00:12:42.386 12:56:46 -- accel/accel.sh@19 -- # read -r var val 00:12:42.386 12:56:46 -- accel/accel.sh@20 -- # val= 00:12:42.386 12:56:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.386 12:56:46 -- accel/accel.sh@19 -- # IFS=: 00:12:42.386 12:56:46 -- accel/accel.sh@19 -- # read -r var val 00:12:42.386 ************************************ 00:12:42.386 END TEST accel_copy 00:12:42.386 ************************************ 00:12:42.386 12:56:46 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:42.386 12:56:46 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:12:42.386 12:56:46 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:42.386 00:12:42.386 real 0m2.510s 00:12:42.386 user 0m2.284s 00:12:42.386 sys 0m0.169s 00:12:42.386 12:56:46 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:12:42.386 12:56:46 -- common/autotest_common.sh@10 -- # set +x 00:12:42.386 12:56:46 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:42.386 12:56:46 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:12:42.386 12:56:46 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:42.386 12:56:46 -- common/autotest_common.sh@10 -- # set +x 00:12:42.386 ************************************ 00:12:42.386 START TEST accel_fill 00:12:42.386 ************************************ 00:12:42.386 12:56:46 -- common/autotest_common.sh@1099 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:42.386 12:56:46 -- accel/accel.sh@16 -- # local accel_opc 00:12:42.386 12:56:46 -- accel/accel.sh@17 -- # local 
accel_module 00:12:42.386 12:56:46 -- accel/accel.sh@19 -- # IFS=: 00:12:42.386 12:56:46 -- accel/accel.sh@19 -- # read -r var val 00:12:42.386 12:56:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:42.386 12:56:46 -- accel/accel.sh@12 -- # build_accel_config 00:12:42.386 12:56:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:42.386 12:56:46 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:42.386 12:56:46 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:42.386 12:56:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:42.386 12:56:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:42.386 12:56:46 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:42.386 12:56:46 -- accel/accel.sh@40 -- # local IFS=, 00:12:42.386 12:56:46 -- accel/accel.sh@41 -- # jq -r . 00:12:42.386 [2024-04-17 12:56:46.385510] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:12:42.386 [2024-04-17 12:56:46.385793] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113574 ] 00:12:42.671 [2024-04-17 12:56:46.545947] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.671 [2024-04-17 12:56:46.768034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.930 12:56:46 -- accel/accel.sh@20 -- # val= 00:12:42.930 12:56:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.930 12:56:46 -- accel/accel.sh@19 -- # IFS=: 00:12:42.930 12:56:46 -- accel/accel.sh@19 -- # read -r var val 00:12:42.930 12:56:46 -- accel/accel.sh@20 -- # val= 00:12:42.930 12:56:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.930 12:56:46 -- accel/accel.sh@19 -- # IFS=: 00:12:42.930 12:56:46 -- accel/accel.sh@19 -- # read -r var val 00:12:42.930 12:56:46 -- accel/accel.sh@20 -- # val=0x1 00:12:42.930 12:56:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.930 12:56:46 -- accel/accel.sh@19 -- # IFS=: 00:12:42.930 12:56:46 -- accel/accel.sh@19 -- # read -r var val 00:12:42.930 12:56:46 -- accel/accel.sh@20 -- # val= 00:12:42.930 12:56:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.930 12:56:46 -- accel/accel.sh@19 -- # IFS=: 00:12:42.930 12:56:46 -- accel/accel.sh@19 -- # read -r var val 00:12:42.930 12:56:46 -- accel/accel.sh@20 -- # val= 00:12:42.930 12:56:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.930 12:56:46 -- accel/accel.sh@19 -- # IFS=: 00:12:42.930 12:56:46 -- accel/accel.sh@19 -- # read -r var val 00:12:42.930 12:56:46 -- accel/accel.sh@20 -- # val=fill 00:12:42.930 12:56:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.930 12:56:46 -- accel/accel.sh@23 -- # accel_opc=fill 00:12:42.930 12:56:46 -- accel/accel.sh@19 -- # IFS=: 00:12:42.930 12:56:46 -- accel/accel.sh@19 -- # read -r var val 00:12:42.930 12:56:46 -- accel/accel.sh@20 -- # val=0x80 00:12:42.930 12:56:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.930 12:56:46 -- accel/accel.sh@19 -- # IFS=: 00:12:42.930 12:56:46 -- accel/accel.sh@19 -- # read -r var val 00:12:42.930 12:56:46 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:42.930 12:56:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.930 12:56:46 -- accel/accel.sh@19 -- # IFS=: 00:12:42.930 12:56:46 -- accel/accel.sh@19 -- # read -r var val 00:12:42.931 12:56:46 -- accel/accel.sh@20 -- # val= 00:12:42.931 12:56:46 -- accel/accel.sh@21 -- # 
case "$var" in 00:12:42.931 12:56:46 -- accel/accel.sh@19 -- # IFS=: 00:12:42.931 12:56:46 -- accel/accel.sh@19 -- # read -r var val 00:12:42.931 12:56:46 -- accel/accel.sh@20 -- # val=software 00:12:42.931 12:56:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.931 12:56:46 -- accel/accel.sh@22 -- # accel_module=software 00:12:42.931 12:56:46 -- accel/accel.sh@19 -- # IFS=: 00:12:42.931 12:56:46 -- accel/accel.sh@19 -- # read -r var val 00:12:42.931 12:56:46 -- accel/accel.sh@20 -- # val=64 00:12:42.931 12:56:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.931 12:56:46 -- accel/accel.sh@19 -- # IFS=: 00:12:42.931 12:56:46 -- accel/accel.sh@19 -- # read -r var val 00:12:42.931 12:56:46 -- accel/accel.sh@20 -- # val=64 00:12:42.931 12:56:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.931 12:56:46 -- accel/accel.sh@19 -- # IFS=: 00:12:42.931 12:56:46 -- accel/accel.sh@19 -- # read -r var val 00:12:42.931 12:56:46 -- accel/accel.sh@20 -- # val=1 00:12:42.931 12:56:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.931 12:56:46 -- accel/accel.sh@19 -- # IFS=: 00:12:42.931 12:56:46 -- accel/accel.sh@19 -- # read -r var val 00:12:42.931 12:56:46 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:42.931 12:56:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.931 12:56:46 -- accel/accel.sh@19 -- # IFS=: 00:12:42.931 12:56:46 -- accel/accel.sh@19 -- # read -r var val 00:12:42.931 12:56:46 -- accel/accel.sh@20 -- # val=Yes 00:12:42.931 12:56:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.931 12:56:46 -- accel/accel.sh@19 -- # IFS=: 00:12:42.931 12:56:46 -- accel/accel.sh@19 -- # read -r var val 00:12:42.931 12:56:46 -- accel/accel.sh@20 -- # val= 00:12:42.931 12:56:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.931 12:56:46 -- accel/accel.sh@19 -- # IFS=: 00:12:42.931 12:56:46 -- accel/accel.sh@19 -- # read -r var val 00:12:42.931 12:56:46 -- accel/accel.sh@20 -- # val= 00:12:42.931 12:56:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.931 12:56:46 -- accel/accel.sh@19 -- # IFS=: 00:12:42.931 12:56:46 -- accel/accel.sh@19 -- # read -r var val 00:12:44.834 12:56:48 -- accel/accel.sh@20 -- # val= 00:12:44.834 12:56:48 -- accel/accel.sh@21 -- # case "$var" in 00:12:44.834 12:56:48 -- accel/accel.sh@19 -- # IFS=: 00:12:44.834 12:56:48 -- accel/accel.sh@19 -- # read -r var val 00:12:44.834 12:56:48 -- accel/accel.sh@20 -- # val= 00:12:44.834 12:56:48 -- accel/accel.sh@21 -- # case "$var" in 00:12:44.834 12:56:48 -- accel/accel.sh@19 -- # IFS=: 00:12:44.834 12:56:48 -- accel/accel.sh@19 -- # read -r var val 00:12:44.834 12:56:48 -- accel/accel.sh@20 -- # val= 00:12:44.834 12:56:48 -- accel/accel.sh@21 -- # case "$var" in 00:12:44.834 12:56:48 -- accel/accel.sh@19 -- # IFS=: 00:12:44.834 12:56:48 -- accel/accel.sh@19 -- # read -r var val 00:12:44.834 12:56:48 -- accel/accel.sh@20 -- # val= 00:12:44.834 12:56:48 -- accel/accel.sh@21 -- # case "$var" in 00:12:44.834 12:56:48 -- accel/accel.sh@19 -- # IFS=: 00:12:44.834 12:56:48 -- accel/accel.sh@19 -- # read -r var val 00:12:44.834 12:56:48 -- accel/accel.sh@20 -- # val= 00:12:44.834 12:56:48 -- accel/accel.sh@21 -- # case "$var" in 00:12:44.834 12:56:48 -- accel/accel.sh@19 -- # IFS=: 00:12:44.834 12:56:48 -- accel/accel.sh@19 -- # read -r var val 00:12:44.834 12:56:48 -- accel/accel.sh@20 -- # val= 00:12:44.834 12:56:48 -- accel/accel.sh@21 -- # case "$var" in 00:12:44.834 12:56:48 -- accel/accel.sh@19 -- # IFS=: 00:12:44.834 12:56:48 -- accel/accel.sh@19 -- # read -r var val 00:12:44.834 
************************************ 00:12:44.834 END TEST accel_fill 00:12:44.834 ************************************ 00:12:44.834 12:56:48 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:44.834 12:56:48 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:12:44.834 12:56:48 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:44.834 00:12:44.834 real 0m2.498s 00:12:44.834 user 0m2.249s 00:12:44.834 sys 0m0.180s 00:12:44.834 12:56:48 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:12:44.834 12:56:48 -- common/autotest_common.sh@10 -- # set +x 00:12:44.834 12:56:48 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:12:44.834 12:56:48 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:12:44.834 12:56:48 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:44.834 12:56:48 -- common/autotest_common.sh@10 -- # set +x 00:12:44.834 ************************************ 00:12:44.834 START TEST accel_copy_crc32c 00:12:44.834 ************************************ 00:12:44.834 12:56:48 -- common/autotest_common.sh@1099 -- # accel_test -t 1 -w copy_crc32c -y 00:12:44.834 12:56:48 -- accel/accel.sh@16 -- # local accel_opc 00:12:44.834 12:56:48 -- accel/accel.sh@17 -- # local accel_module 00:12:44.834 12:56:48 -- accel/accel.sh@19 -- # IFS=: 00:12:44.834 12:56:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:12:44.834 12:56:48 -- accel/accel.sh@19 -- # read -r var val 00:12:44.834 12:56:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:12:44.834 12:56:48 -- accel/accel.sh@12 -- # build_accel_config 00:12:44.834 12:56:48 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:44.834 12:56:48 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:44.834 12:56:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:44.834 12:56:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:44.834 12:56:48 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:44.834 12:56:48 -- accel/accel.sh@40 -- # local IFS=, 00:12:44.834 12:56:48 -- accel/accel.sh@41 -- # jq -r . 00:12:44.834 [2024-04-17 12:56:48.965279] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
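accel_copy ran with defaults plus verification, while accel_fill also pinned the fill byte and queue shape: -f 128 is the BYTE value written, -q 64 the queue depth per core, and -a 64 the tasks allocated per core (the default would equal -q anyway, per the usage text). Manual equivalents, same build tree assumed:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy -y
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y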
00:12:44.834 [2024-04-17 12:56:48.965619] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113648 ] 00:12:45.092 [2024-04-17 12:56:49.123018] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.350 [2024-04-17 12:56:49.355255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.609 12:56:49 -- accel/accel.sh@20 -- # val= 00:12:45.609 12:56:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.609 12:56:49 -- accel/accel.sh@19 -- # IFS=: 00:12:45.609 12:56:49 -- accel/accel.sh@19 -- # read -r var val 00:12:45.609 12:56:49 -- accel/accel.sh@20 -- # val= 00:12:45.609 12:56:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.609 12:56:49 -- accel/accel.sh@19 -- # IFS=: 00:12:45.609 12:56:49 -- accel/accel.sh@19 -- # read -r var val 00:12:45.609 12:56:49 -- accel/accel.sh@20 -- # val=0x1 00:12:45.609 12:56:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.609 12:56:49 -- accel/accel.sh@19 -- # IFS=: 00:12:45.609 12:56:49 -- accel/accel.sh@19 -- # read -r var val 00:12:45.609 12:56:49 -- accel/accel.sh@20 -- # val= 00:12:45.609 12:56:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.609 12:56:49 -- accel/accel.sh@19 -- # IFS=: 00:12:45.609 12:56:49 -- accel/accel.sh@19 -- # read -r var val 00:12:45.609 12:56:49 -- accel/accel.sh@20 -- # val= 00:12:45.609 12:56:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.609 12:56:49 -- accel/accel.sh@19 -- # IFS=: 00:12:45.609 12:56:49 -- accel/accel.sh@19 -- # read -r var val 00:12:45.609 12:56:49 -- accel/accel.sh@20 -- # val=copy_crc32c 00:12:45.609 12:56:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.609 12:56:49 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:12:45.609 12:56:49 -- accel/accel.sh@19 -- # IFS=: 00:12:45.609 12:56:49 -- accel/accel.sh@19 -- # read -r var val 00:12:45.609 12:56:49 -- accel/accel.sh@20 -- # val=0 00:12:45.609 12:56:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.609 12:56:49 -- accel/accel.sh@19 -- # IFS=: 00:12:45.609 12:56:49 -- accel/accel.sh@19 -- # read -r var val 00:12:45.609 12:56:49 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:45.609 12:56:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.609 12:56:49 -- accel/accel.sh@19 -- # IFS=: 00:12:45.609 12:56:49 -- accel/accel.sh@19 -- # read -r var val 00:12:45.609 12:56:49 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:45.609 12:56:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.609 12:56:49 -- accel/accel.sh@19 -- # IFS=: 00:12:45.609 12:56:49 -- accel/accel.sh@19 -- # read -r var val 00:12:45.609 12:56:49 -- accel/accel.sh@20 -- # val= 00:12:45.609 12:56:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.609 12:56:49 -- accel/accel.sh@19 -- # IFS=: 00:12:45.609 12:56:49 -- accel/accel.sh@19 -- # read -r var val 00:12:45.609 12:56:49 -- accel/accel.sh@20 -- # val=software 00:12:45.609 12:56:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.609 12:56:49 -- accel/accel.sh@22 -- # accel_module=software 00:12:45.609 12:56:49 -- accel/accel.sh@19 -- # IFS=: 00:12:45.609 12:56:49 -- accel/accel.sh@19 -- # read -r var val 00:12:45.609 12:56:49 -- accel/accel.sh@20 -- # val=32 00:12:45.609 12:56:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.609 12:56:49 -- accel/accel.sh@19 -- # IFS=: 00:12:45.609 12:56:49 -- accel/accel.sh@19 -- # read -r var val 00:12:45.609 12:56:49 -- accel/accel.sh@20 -- # val=32 
00:12:45.609 12:56:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.609 12:56:49 -- accel/accel.sh@19 -- # IFS=: 00:12:45.609 12:56:49 -- accel/accel.sh@19 -- # read -r var val 00:12:45.609 12:56:49 -- accel/accel.sh@20 -- # val=1 00:12:45.609 12:56:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.609 12:56:49 -- accel/accel.sh@19 -- # IFS=: 00:12:45.609 12:56:49 -- accel/accel.sh@19 -- # read -r var val 00:12:45.609 12:56:49 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:45.609 12:56:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.609 12:56:49 -- accel/accel.sh@19 -- # IFS=: 00:12:45.609 12:56:49 -- accel/accel.sh@19 -- # read -r var val 00:12:45.609 12:56:49 -- accel/accel.sh@20 -- # val=Yes 00:12:45.609 12:56:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.609 12:56:49 -- accel/accel.sh@19 -- # IFS=: 00:12:45.609 12:56:49 -- accel/accel.sh@19 -- # read -r var val 00:12:45.609 12:56:49 -- accel/accel.sh@20 -- # val= 00:12:45.609 12:56:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.609 12:56:49 -- accel/accel.sh@19 -- # IFS=: 00:12:45.609 12:56:49 -- accel/accel.sh@19 -- # read -r var val 00:12:45.609 12:56:49 -- accel/accel.sh@20 -- # val= 00:12:45.609 12:56:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.609 12:56:49 -- accel/accel.sh@19 -- # IFS=: 00:12:45.609 12:56:49 -- accel/accel.sh@19 -- # read -r var val 00:12:47.510 12:56:51 -- accel/accel.sh@20 -- # val= 00:12:47.510 12:56:51 -- accel/accel.sh@21 -- # case "$var" in 00:12:47.510 12:56:51 -- accel/accel.sh@19 -- # IFS=: 00:12:47.510 12:56:51 -- accel/accel.sh@19 -- # read -r var val 00:12:47.510 12:56:51 -- accel/accel.sh@20 -- # val= 00:12:47.510 12:56:51 -- accel/accel.sh@21 -- # case "$var" in 00:12:47.510 12:56:51 -- accel/accel.sh@19 -- # IFS=: 00:12:47.510 12:56:51 -- accel/accel.sh@19 -- # read -r var val 00:12:47.510 12:56:51 -- accel/accel.sh@20 -- # val= 00:12:47.510 12:56:51 -- accel/accel.sh@21 -- # case "$var" in 00:12:47.510 12:56:51 -- accel/accel.sh@19 -- # IFS=: 00:12:47.510 12:56:51 -- accel/accel.sh@19 -- # read -r var val 00:12:47.510 12:56:51 -- accel/accel.sh@20 -- # val= 00:12:47.510 12:56:51 -- accel/accel.sh@21 -- # case "$var" in 00:12:47.510 12:56:51 -- accel/accel.sh@19 -- # IFS=: 00:12:47.510 12:56:51 -- accel/accel.sh@19 -- # read -r var val 00:12:47.510 12:56:51 -- accel/accel.sh@20 -- # val= 00:12:47.510 12:56:51 -- accel/accel.sh@21 -- # case "$var" in 00:12:47.510 12:56:51 -- accel/accel.sh@19 -- # IFS=: 00:12:47.510 12:56:51 -- accel/accel.sh@19 -- # read -r var val 00:12:47.510 12:56:51 -- accel/accel.sh@20 -- # val= 00:12:47.510 12:56:51 -- accel/accel.sh@21 -- # case "$var" in 00:12:47.510 12:56:51 -- accel/accel.sh@19 -- # IFS=: 00:12:47.510 12:56:51 -- accel/accel.sh@19 -- # read -r var val 00:12:47.510 ************************************ 00:12:47.510 END TEST accel_copy_crc32c 00:12:47.510 ************************************ 00:12:47.510 12:56:51 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:47.510 12:56:51 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:12:47.510 12:56:51 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:47.510 00:12:47.510 real 0m2.512s 00:12:47.510 user 0m2.235s 00:12:47.510 sys 0m0.206s 00:12:47.510 12:56:51 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:12:47.510 12:56:51 -- common/autotest_common.sh@10 -- # set +x 00:12:47.510 12:56:51 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:12:47.510 12:56:51 -- common/autotest_common.sh@1075 -- # '[' 9 -le 1 
']' 00:12:47.510 12:56:51 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:47.510 12:56:51 -- common/autotest_common.sh@10 -- # set +x 00:12:47.510 ************************************ 00:12:47.510 START TEST accel_copy_crc32c_C2 00:12:47.510 ************************************ 00:12:47.510 12:56:51 -- common/autotest_common.sh@1099 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:12:47.510 12:56:51 -- accel/accel.sh@16 -- # local accel_opc 00:12:47.510 12:56:51 -- accel/accel.sh@17 -- # local accel_module 00:12:47.510 12:56:51 -- accel/accel.sh@19 -- # IFS=: 00:12:47.510 12:56:51 -- accel/accel.sh@19 -- # read -r var val 00:12:47.510 12:56:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:12:47.510 12:56:51 -- accel/accel.sh@12 -- # build_accel_config 00:12:47.510 12:56:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:12:47.510 12:56:51 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:47.510 12:56:51 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:47.510 12:56:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:47.510 12:56:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:47.510 12:56:51 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:47.510 12:56:51 -- accel/accel.sh@40 -- # local IFS=, 00:12:47.510 12:56:51 -- accel/accel.sh@41 -- # jq -r . 00:12:47.510 [2024-04-17 12:56:51.554766] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:12:47.510 [2024-04-17 12:56:51.555157] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113710 ] 00:12:47.810 [2024-04-17 12:56:51.725705] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.067 [2024-04-17 12:56:52.066939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.325 12:56:52 -- accel/accel.sh@20 -- # val= 00:12:48.325 12:56:52 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.325 12:56:52 -- accel/accel.sh@19 -- # IFS=: 00:12:48.325 12:56:52 -- accel/accel.sh@19 -- # read -r var val 00:12:48.325 12:56:52 -- accel/accel.sh@20 -- # val= 00:12:48.325 12:56:52 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.325 12:56:52 -- accel/accel.sh@19 -- # IFS=: 00:12:48.325 12:56:52 -- accel/accel.sh@19 -- # read -r var val 00:12:48.325 12:56:52 -- accel/accel.sh@20 -- # val=0x1 00:12:48.325 12:56:52 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.325 12:56:52 -- accel/accel.sh@19 -- # IFS=: 00:12:48.325 12:56:52 -- accel/accel.sh@19 -- # read -r var val 00:12:48.325 12:56:52 -- accel/accel.sh@20 -- # val= 00:12:48.325 12:56:52 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.325 12:56:52 -- accel/accel.sh@19 -- # IFS=: 00:12:48.325 12:56:52 -- accel/accel.sh@19 -- # read -r var val 00:12:48.325 12:56:52 -- accel/accel.sh@20 -- # val= 00:12:48.325 12:56:52 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.325 12:56:52 -- accel/accel.sh@19 -- # IFS=: 00:12:48.325 12:56:52 -- accel/accel.sh@19 -- # read -r var val 00:12:48.325 12:56:52 -- accel/accel.sh@20 -- # val=copy_crc32c 00:12:48.325 12:56:52 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.325 12:56:52 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:12:48.325 12:56:52 -- accel/accel.sh@19 -- # IFS=: 00:12:48.325 12:56:52 -- accel/accel.sh@19 -- # read -r var val 00:12:48.325 12:56:52 -- accel/accel.sh@20 -- # val=0 00:12:48.325 12:56:52 -- 
accel/accel.sh@21 -- # case "$var" in 00:12:48.325 12:56:52 -- accel/accel.sh@19 -- # IFS=: 00:12:48.325 12:56:52 -- accel/accel.sh@19 -- # read -r var val 00:12:48.325 12:56:52 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:48.325 12:56:52 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.325 12:56:52 -- accel/accel.sh@19 -- # IFS=: 00:12:48.325 12:56:52 -- accel/accel.sh@19 -- # read -r var val 00:12:48.325 12:56:52 -- accel/accel.sh@20 -- # val='8192 bytes' 00:12:48.325 12:56:52 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.325 12:56:52 -- accel/accel.sh@19 -- # IFS=: 00:12:48.325 12:56:52 -- accel/accel.sh@19 -- # read -r var val 00:12:48.325 12:56:52 -- accel/accel.sh@20 -- # val= 00:12:48.325 12:56:52 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.325 12:56:52 -- accel/accel.sh@19 -- # IFS=: 00:12:48.325 12:56:52 -- accel/accel.sh@19 -- # read -r var val 00:12:48.325 12:56:52 -- accel/accel.sh@20 -- # val=software 00:12:48.325 12:56:52 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.325 12:56:52 -- accel/accel.sh@22 -- # accel_module=software 00:12:48.325 12:56:52 -- accel/accel.sh@19 -- # IFS=: 00:12:48.325 12:56:52 -- accel/accel.sh@19 -- # read -r var val 00:12:48.325 12:56:52 -- accel/accel.sh@20 -- # val=32 00:12:48.325 12:56:52 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.325 12:56:52 -- accel/accel.sh@19 -- # IFS=: 00:12:48.325 12:56:52 -- accel/accel.sh@19 -- # read -r var val 00:12:48.325 12:56:52 -- accel/accel.sh@20 -- # val=32 00:12:48.325 12:56:52 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.325 12:56:52 -- accel/accel.sh@19 -- # IFS=: 00:12:48.325 12:56:52 -- accel/accel.sh@19 -- # read -r var val 00:12:48.325 12:56:52 -- accel/accel.sh@20 -- # val=1 00:12:48.325 12:56:52 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.325 12:56:52 -- accel/accel.sh@19 -- # IFS=: 00:12:48.325 12:56:52 -- accel/accel.sh@19 -- # read -r var val 00:12:48.325 12:56:52 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:48.325 12:56:52 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.325 12:56:52 -- accel/accel.sh@19 -- # IFS=: 00:12:48.325 12:56:52 -- accel/accel.sh@19 -- # read -r var val 00:12:48.325 12:56:52 -- accel/accel.sh@20 -- # val=Yes 00:12:48.325 12:56:52 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.325 12:56:52 -- accel/accel.sh@19 -- # IFS=: 00:12:48.325 12:56:52 -- accel/accel.sh@19 -- # read -r var val 00:12:48.325 12:56:52 -- accel/accel.sh@20 -- # val= 00:12:48.325 12:56:52 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.325 12:56:52 -- accel/accel.sh@19 -- # IFS=: 00:12:48.325 12:56:52 -- accel/accel.sh@19 -- # read -r var val 00:12:48.325 12:56:52 -- accel/accel.sh@20 -- # val= 00:12:48.325 12:56:52 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.325 12:56:52 -- accel/accel.sh@19 -- # IFS=: 00:12:48.325 12:56:52 -- accel/accel.sh@19 -- # read -r var val 00:12:50.229 12:56:54 -- accel/accel.sh@20 -- # val= 00:12:50.229 12:56:54 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.229 12:56:54 -- accel/accel.sh@19 -- # IFS=: 00:12:50.229 12:56:54 -- accel/accel.sh@19 -- # read -r var val 00:12:50.229 12:56:54 -- accel/accel.sh@20 -- # val= 00:12:50.229 12:56:54 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.229 12:56:54 -- accel/accel.sh@19 -- # IFS=: 00:12:50.229 12:56:54 -- accel/accel.sh@19 -- # read -r var val 00:12:50.229 12:56:54 -- accel/accel.sh@20 -- # val= 00:12:50.229 12:56:54 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.229 12:56:54 -- accel/accel.sh@19 -- # IFS=: 00:12:50.229 12:56:54 -- accel/accel.sh@19 -- # read -r var val 
00:12:50.229 12:56:54 -- accel/accel.sh@20 -- # val= 00:12:50.229 12:56:54 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.229 12:56:54 -- accel/accel.sh@19 -- # IFS=: 00:12:50.229 12:56:54 -- accel/accel.sh@19 -- # read -r var val 00:12:50.229 12:56:54 -- accel/accel.sh@20 -- # val= 00:12:50.229 12:56:54 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.229 12:56:54 -- accel/accel.sh@19 -- # IFS=: 00:12:50.229 12:56:54 -- accel/accel.sh@19 -- # read -r var val 00:12:50.229 12:56:54 -- accel/accel.sh@20 -- # val= 00:12:50.229 12:56:54 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.229 12:56:54 -- accel/accel.sh@19 -- # IFS=: 00:12:50.229 12:56:54 -- accel/accel.sh@19 -- # read -r var val 00:12:50.229 ************************************ 00:12:50.229 END TEST accel_copy_crc32c_C2 00:12:50.229 ************************************ 00:12:50.229 12:56:54 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:50.229 12:56:54 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:12:50.229 12:56:54 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:50.229 00:12:50.229 real 0m2.663s 00:12:50.229 user 0m2.392s 00:12:50.229 sys 0m0.202s 00:12:50.229 12:56:54 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:12:50.229 12:56:54 -- common/autotest_common.sh@10 -- # set +x 00:12:50.229 12:56:54 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:12:50.229 12:56:54 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:12:50.229 12:56:54 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:50.229 12:56:54 -- common/autotest_common.sh@10 -- # set +x 00:12:50.229 ************************************ 00:12:50.229 START TEST accel_dualcast 00:12:50.229 ************************************ 00:12:50.229 12:56:54 -- common/autotest_common.sh@1099 -- # accel_test -t 1 -w dualcast -y 00:12:50.229 12:56:54 -- accel/accel.sh@16 -- # local accel_opc 00:12:50.229 12:56:54 -- accel/accel.sh@17 -- # local accel_module 00:12:50.229 12:56:54 -- accel/accel.sh@19 -- # IFS=: 00:12:50.229 12:56:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:12:50.229 12:56:54 -- accel/accel.sh@19 -- # read -r var val 00:12:50.229 12:56:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:12:50.229 12:56:54 -- accel/accel.sh@12 -- # build_accel_config 00:12:50.229 12:56:54 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:50.229 12:56:54 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:50.229 12:56:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:50.229 12:56:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:50.229 12:56:54 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:50.229 12:56:54 -- accel/accel.sh@40 -- # local IFS=, 00:12:50.229 12:56:54 -- accel/accel.sh@41 -- # jq -r . 00:12:50.229 [2024-04-17 12:56:54.288482] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
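For context on the test that just finished above: copy_crc32c copies a source buffer into a destination while computing a CRC32C checksum over the bytes in the same pass, and the -C 2 variant feeds the checksum from a source split across two buffers. A minimal standalone C sketch of that semantic follows; the function names and the bitwise CRC are illustrative only, not SPDK's actual software path:

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    /* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78. */
    static uint32_t crc32c(uint32_t crc, const void *buf, size_t len)
    {
        const uint8_t *p = buf;
        crc = ~crc;
        while (len--) {
            crc ^= *p++;
            for (int k = 0; k < 8; k++)
                crc = (crc >> 1) ^ (0x82F63B78 & -(crc & 1));
        }
        return ~crc;
    }

    /* copy_crc32c with a two-buffer source (the -C 2 case): copy both
     * pieces into dst and run one CRC32C across them in order. */
    static uint32_t copy_crc32c_2(uint8_t *dst,
                                  const uint8_t *src1, size_t len1,
                                  const uint8_t *src2, size_t len2)
    {
        memcpy(dst, src1, len1);
        memcpy(dst + len1, src2, len2);
        uint32_t crc = crc32c(0, src1, len1);
        return crc32c(crc, src2, len2);  /* chain the running value */
    }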
00:12:50.229 [2024-04-17 12:56:54.288809] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113774 ] 00:12:50.486 [2024-04-17 12:56:54.453623] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.745 [2024-04-17 12:56:54.683176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.003 12:56:54 -- accel/accel.sh@20 -- # val= 00:12:51.003 12:56:54 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.003 12:56:54 -- accel/accel.sh@19 -- # IFS=: 00:12:51.003 12:56:54 -- accel/accel.sh@19 -- # read -r var val 00:12:51.003 12:56:54 -- accel/accel.sh@20 -- # val= 00:12:51.003 12:56:54 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.003 12:56:54 -- accel/accel.sh@19 -- # IFS=: 00:12:51.003 12:56:54 -- accel/accel.sh@19 -- # read -r var val 00:12:51.003 12:56:54 -- accel/accel.sh@20 -- # val=0x1 00:12:51.003 12:56:54 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.003 12:56:54 -- accel/accel.sh@19 -- # IFS=: 00:12:51.003 12:56:54 -- accel/accel.sh@19 -- # read -r var val 00:12:51.003 12:56:54 -- accel/accel.sh@20 -- # val= 00:12:51.003 12:56:54 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.003 12:56:54 -- accel/accel.sh@19 -- # IFS=: 00:12:51.003 12:56:54 -- accel/accel.sh@19 -- # read -r var val 00:12:51.003 12:56:54 -- accel/accel.sh@20 -- # val= 00:12:51.003 12:56:54 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.003 12:56:54 -- accel/accel.sh@19 -- # IFS=: 00:12:51.003 12:56:54 -- accel/accel.sh@19 -- # read -r var val 00:12:51.003 12:56:54 -- accel/accel.sh@20 -- # val=dualcast 00:12:51.003 12:56:54 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.003 12:56:54 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:12:51.003 12:56:54 -- accel/accel.sh@19 -- # IFS=: 00:12:51.003 12:56:54 -- accel/accel.sh@19 -- # read -r var val 00:12:51.003 12:56:54 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:51.003 12:56:54 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.003 12:56:54 -- accel/accel.sh@19 -- # IFS=: 00:12:51.003 12:56:54 -- accel/accel.sh@19 -- # read -r var val 00:12:51.003 12:56:54 -- accel/accel.sh@20 -- # val= 00:12:51.003 12:56:54 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.003 12:56:54 -- accel/accel.sh@19 -- # IFS=: 00:12:51.003 12:56:54 -- accel/accel.sh@19 -- # read -r var val 00:12:51.003 12:56:54 -- accel/accel.sh@20 -- # val=software 00:12:51.003 12:56:54 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.003 12:56:54 -- accel/accel.sh@22 -- # accel_module=software 00:12:51.003 12:56:54 -- accel/accel.sh@19 -- # IFS=: 00:12:51.003 12:56:54 -- accel/accel.sh@19 -- # read -r var val 00:12:51.003 12:56:54 -- accel/accel.sh@20 -- # val=32 00:12:51.003 12:56:54 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.003 12:56:54 -- accel/accel.sh@19 -- # IFS=: 00:12:51.003 12:56:54 -- accel/accel.sh@19 -- # read -r var val 00:12:51.003 12:56:54 -- accel/accel.sh@20 -- # val=32 00:12:51.003 12:56:54 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.003 12:56:54 -- accel/accel.sh@19 -- # IFS=: 00:12:51.003 12:56:54 -- accel/accel.sh@19 -- # read -r var val 00:12:51.003 12:56:54 -- accel/accel.sh@20 -- # val=1 00:12:51.003 12:56:54 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.003 12:56:54 -- accel/accel.sh@19 -- # IFS=: 00:12:51.003 12:56:54 -- accel/accel.sh@19 -- # read -r var val 00:12:51.003 12:56:54 -- accel/accel.sh@20 -- # val='1 seconds' 
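The dualcast workload being configured in the trace above fans one source buffer out to two destinations in a single operation. A minimal sketch with plain memcpy semantics (illustrative; the offload this models produces both copies from one read of the source):

    #include <string.h>
    #include <stddef.h>

    /* dualcast: one logical read of src yields two identical copies. */
    static void dualcast(void *dst1, void *dst2, const void *src, size_t len)
    {
        memcpy(dst1, src, len);
        memcpy(dst2, src, len);
    }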
00:12:51.003 12:56:54 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.003 12:56:54 -- accel/accel.sh@19 -- # IFS=: 00:12:51.003 12:56:54 -- accel/accel.sh@19 -- # read -r var val 00:12:51.003 12:56:54 -- accel/accel.sh@20 -- # val=Yes 00:12:51.003 12:56:54 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.003 12:56:54 -- accel/accel.sh@19 -- # IFS=: 00:12:51.003 12:56:54 -- accel/accel.sh@19 -- # read -r var val 00:12:51.003 12:56:54 -- accel/accel.sh@20 -- # val= 00:12:51.003 12:56:54 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.003 12:56:54 -- accel/accel.sh@19 -- # IFS=: 00:12:51.003 12:56:54 -- accel/accel.sh@19 -- # read -r var val 00:12:51.003 12:56:54 -- accel/accel.sh@20 -- # val= 00:12:51.003 12:56:54 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.003 12:56:54 -- accel/accel.sh@19 -- # IFS=: 00:12:51.003 12:56:54 -- accel/accel.sh@19 -- # read -r var val 00:12:52.904 12:56:56 -- accel/accel.sh@20 -- # val= 00:12:52.904 12:56:56 -- accel/accel.sh@21 -- # case "$var" in 00:12:52.904 12:56:56 -- accel/accel.sh@19 -- # IFS=: 00:12:52.904 12:56:56 -- accel/accel.sh@19 -- # read -r var val 00:12:52.904 12:56:56 -- accel/accel.sh@20 -- # val= 00:12:52.904 12:56:56 -- accel/accel.sh@21 -- # case "$var" in 00:12:52.904 12:56:56 -- accel/accel.sh@19 -- # IFS=: 00:12:52.904 12:56:56 -- accel/accel.sh@19 -- # read -r var val 00:12:52.904 12:56:56 -- accel/accel.sh@20 -- # val= 00:12:52.904 12:56:56 -- accel/accel.sh@21 -- # case "$var" in 00:12:52.904 12:56:56 -- accel/accel.sh@19 -- # IFS=: 00:12:52.904 12:56:56 -- accel/accel.sh@19 -- # read -r var val 00:12:52.904 12:56:56 -- accel/accel.sh@20 -- # val= 00:12:52.904 12:56:56 -- accel/accel.sh@21 -- # case "$var" in 00:12:52.904 12:56:56 -- accel/accel.sh@19 -- # IFS=: 00:12:52.904 12:56:56 -- accel/accel.sh@19 -- # read -r var val 00:12:52.904 12:56:56 -- accel/accel.sh@20 -- # val= 00:12:52.904 12:56:56 -- accel/accel.sh@21 -- # case "$var" in 00:12:52.904 12:56:56 -- accel/accel.sh@19 -- # IFS=: 00:12:52.904 12:56:56 -- accel/accel.sh@19 -- # read -r var val 00:12:52.904 12:56:56 -- accel/accel.sh@20 -- # val= 00:12:52.904 12:56:56 -- accel/accel.sh@21 -- # case "$var" in 00:12:52.904 12:56:56 -- accel/accel.sh@19 -- # IFS=: 00:12:52.904 12:56:56 -- accel/accel.sh@19 -- # read -r var val 00:12:52.904 ************************************ 00:12:52.904 END TEST accel_dualcast 00:12:52.904 ************************************ 00:12:52.904 12:56:56 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:52.904 12:56:56 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:12:52.904 12:56:56 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:52.904 00:12:52.904 real 0m2.535s 00:12:52.904 user 0m2.265s 00:12:52.904 sys 0m0.196s 00:12:52.904 12:56:56 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:12:52.904 12:56:56 -- common/autotest_common.sh@10 -- # set +x 00:12:52.904 12:56:56 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:12:52.904 12:56:56 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:12:52.904 12:56:56 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:52.904 12:56:56 -- common/autotest_common.sh@10 -- # set +x 00:12:52.904 ************************************ 00:12:52.904 START TEST accel_compare 00:12:52.904 ************************************ 00:12:52.904 12:56:56 -- common/autotest_common.sh@1099 -- # accel_test -t 1 -w compare -y 00:12:52.904 12:56:56 -- accel/accel.sh@16 -- # local accel_opc 00:12:52.904 12:56:56 -- accel/accel.sh@17 -- # local 
accel_module 00:12:52.904 12:56:56 -- accel/accel.sh@19 -- # IFS=: 00:12:52.904 12:56:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:12:52.904 12:56:56 -- accel/accel.sh@19 -- # read -r var val 00:12:52.904 12:56:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:12:52.904 12:56:56 -- accel/accel.sh@12 -- # build_accel_config 00:12:52.904 12:56:56 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:52.904 12:56:56 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:52.904 12:56:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:52.904 12:56:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:52.904 12:56:56 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:52.904 12:56:56 -- accel/accel.sh@40 -- # local IFS=, 00:12:52.904 12:56:56 -- accel/accel.sh@41 -- # jq -r . 00:12:52.904 [2024-04-17 12:56:56.910554] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:12:52.904 [2024-04-17 12:56:56.910936] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113832 ] 00:12:53.161 [2024-04-17 12:56:57.081568] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.419 [2024-04-17 12:56:57.311231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.419 12:56:57 -- accel/accel.sh@20 -- # val= 00:12:53.419 12:56:57 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.419 12:56:57 -- accel/accel.sh@19 -- # IFS=: 00:12:53.419 12:56:57 -- accel/accel.sh@19 -- # read -r var val 00:12:53.419 12:56:57 -- accel/accel.sh@20 -- # val= 00:12:53.419 12:56:57 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.419 12:56:57 -- accel/accel.sh@19 -- # IFS=: 00:12:53.419 12:56:57 -- accel/accel.sh@19 -- # read -r var val 00:12:53.419 12:56:57 -- accel/accel.sh@20 -- # val=0x1 00:12:53.419 12:56:57 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.419 12:56:57 -- accel/accel.sh@19 -- # IFS=: 00:12:53.420 12:56:57 -- accel/accel.sh@19 -- # read -r var val 00:12:53.420 12:56:57 -- accel/accel.sh@20 -- # val= 00:12:53.420 12:56:57 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.420 12:56:57 -- accel/accel.sh@19 -- # IFS=: 00:12:53.420 12:56:57 -- accel/accel.sh@19 -- # read -r var val 00:12:53.420 12:56:57 -- accel/accel.sh@20 -- # val= 00:12:53.420 12:56:57 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.420 12:56:57 -- accel/accel.sh@19 -- # IFS=: 00:12:53.420 12:56:57 -- accel/accel.sh@19 -- # read -r var val 00:12:53.420 12:56:57 -- accel/accel.sh@20 -- # val=compare 00:12:53.420 12:56:57 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.420 12:56:57 -- accel/accel.sh@23 -- # accel_opc=compare 00:12:53.420 12:56:57 -- accel/accel.sh@19 -- # IFS=: 00:12:53.420 12:56:57 -- accel/accel.sh@19 -- # read -r var val 00:12:53.420 12:56:57 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:53.420 12:56:57 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.420 12:56:57 -- accel/accel.sh@19 -- # IFS=: 00:12:53.420 12:56:57 -- accel/accel.sh@19 -- # read -r var val 00:12:53.420 12:56:57 -- accel/accel.sh@20 -- # val= 00:12:53.420 12:56:57 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.420 12:56:57 -- accel/accel.sh@19 -- # IFS=: 00:12:53.420 12:56:57 -- accel/accel.sh@19 -- # read -r var val 00:12:53.420 12:56:57 -- accel/accel.sh@20 -- # val=software 00:12:53.420 12:56:57 -- accel/accel.sh@21 -- # case "$var" in 
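compare, configured above, reads two equally sized buffers and reports whether any byte differs; accel_perf counts a mismatch as a failed operation. In portable C the software fallback boils down to a memcmp (a sketch, not the SPDK code):

    #include <string.h>
    #include <stddef.h>

    /* compare: nonzero means the buffers differ. */
    static int buffers_differ(const void *a, const void *b, size_t len)
    {
        return memcmp(a, b, len) != 0;
    }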
00:12:53.420 12:56:57 -- accel/accel.sh@22 -- # accel_module=software 00:12:53.420 12:56:57 -- accel/accel.sh@19 -- # IFS=: 00:12:53.420 12:56:57 -- accel/accel.sh@19 -- # read -r var val 00:12:53.420 12:56:57 -- accel/accel.sh@20 -- # val=32 00:12:53.420 12:56:57 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.420 12:56:57 -- accel/accel.sh@19 -- # IFS=: 00:12:53.420 12:56:57 -- accel/accel.sh@19 -- # read -r var val 00:12:53.420 12:56:57 -- accel/accel.sh@20 -- # val=32 00:12:53.420 12:56:57 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.420 12:56:57 -- accel/accel.sh@19 -- # IFS=: 00:12:53.420 12:56:57 -- accel/accel.sh@19 -- # read -r var val 00:12:53.420 12:56:57 -- accel/accel.sh@20 -- # val=1 00:12:53.420 12:56:57 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.420 12:56:57 -- accel/accel.sh@19 -- # IFS=: 00:12:53.420 12:56:57 -- accel/accel.sh@19 -- # read -r var val 00:12:53.420 12:56:57 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:53.420 12:56:57 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.420 12:56:57 -- accel/accel.sh@19 -- # IFS=: 00:12:53.420 12:56:57 -- accel/accel.sh@19 -- # read -r var val 00:12:53.420 12:56:57 -- accel/accel.sh@20 -- # val=Yes 00:12:53.420 12:56:57 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.420 12:56:57 -- accel/accel.sh@19 -- # IFS=: 00:12:53.420 12:56:57 -- accel/accel.sh@19 -- # read -r var val 00:12:53.420 12:56:57 -- accel/accel.sh@20 -- # val= 00:12:53.420 12:56:57 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.420 12:56:57 -- accel/accel.sh@19 -- # IFS=: 00:12:53.420 12:56:57 -- accel/accel.sh@19 -- # read -r var val 00:12:53.420 12:56:57 -- accel/accel.sh@20 -- # val= 00:12:53.420 12:56:57 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.420 12:56:57 -- accel/accel.sh@19 -- # IFS=: 00:12:53.420 12:56:57 -- accel/accel.sh@19 -- # read -r var val 00:12:55.320 12:56:59 -- accel/accel.sh@20 -- # val= 00:12:55.320 12:56:59 -- accel/accel.sh@21 -- # case "$var" in 00:12:55.320 12:56:59 -- accel/accel.sh@19 -- # IFS=: 00:12:55.320 12:56:59 -- accel/accel.sh@19 -- # read -r var val 00:12:55.320 12:56:59 -- accel/accel.sh@20 -- # val= 00:12:55.320 12:56:59 -- accel/accel.sh@21 -- # case "$var" in 00:12:55.320 12:56:59 -- accel/accel.sh@19 -- # IFS=: 00:12:55.320 12:56:59 -- accel/accel.sh@19 -- # read -r var val 00:12:55.320 12:56:59 -- accel/accel.sh@20 -- # val= 00:12:55.320 12:56:59 -- accel/accel.sh@21 -- # case "$var" in 00:12:55.320 12:56:59 -- accel/accel.sh@19 -- # IFS=: 00:12:55.320 12:56:59 -- accel/accel.sh@19 -- # read -r var val 00:12:55.320 12:56:59 -- accel/accel.sh@20 -- # val= 00:12:55.320 12:56:59 -- accel/accel.sh@21 -- # case "$var" in 00:12:55.320 12:56:59 -- accel/accel.sh@19 -- # IFS=: 00:12:55.320 12:56:59 -- accel/accel.sh@19 -- # read -r var val 00:12:55.320 12:56:59 -- accel/accel.sh@20 -- # val= 00:12:55.320 12:56:59 -- accel/accel.sh@21 -- # case "$var" in 00:12:55.320 12:56:59 -- accel/accel.sh@19 -- # IFS=: 00:12:55.320 12:56:59 -- accel/accel.sh@19 -- # read -r var val 00:12:55.320 12:56:59 -- accel/accel.sh@20 -- # val= 00:12:55.320 12:56:59 -- accel/accel.sh@21 -- # case "$var" in 00:12:55.320 12:56:59 -- accel/accel.sh@19 -- # IFS=: 00:12:55.320 12:56:59 -- accel/accel.sh@19 -- # read -r var val 00:12:55.320 ************************************ 00:12:55.320 END TEST accel_compare 00:12:55.320 ************************************ 00:12:55.320 12:56:59 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:55.320 12:56:59 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:12:55.320 12:56:59 -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:55.320 00:12:55.320 real 0m2.540s 00:12:55.320 user 0m2.305s 00:12:55.320 sys 0m0.162s 00:12:55.320 12:56:59 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:12:55.320 12:56:59 -- common/autotest_common.sh@10 -- # set +x 00:12:55.320 12:56:59 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:12:55.320 12:56:59 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:12:55.320 12:56:59 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:55.320 12:56:59 -- common/autotest_common.sh@10 -- # set +x 00:12:55.578 ************************************ 00:12:55.578 START TEST accel_xor 00:12:55.578 ************************************ 00:12:55.578 12:56:59 -- common/autotest_common.sh@1099 -- # accel_test -t 1 -w xor -y 00:12:55.578 12:56:59 -- accel/accel.sh@16 -- # local accel_opc 00:12:55.578 12:56:59 -- accel/accel.sh@17 -- # local accel_module 00:12:55.578 12:56:59 -- accel/accel.sh@19 -- # IFS=: 00:12:55.578 12:56:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:12:55.578 12:56:59 -- accel/accel.sh@19 -- # read -r var val 00:12:55.578 12:56:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:12:55.578 12:56:59 -- accel/accel.sh@12 -- # build_accel_config 00:12:55.578 12:56:59 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:55.578 12:56:59 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:55.578 12:56:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:55.578 12:56:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:55.578 12:56:59 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:55.578 12:56:59 -- accel/accel.sh@40 -- # local IFS=, 00:12:55.578 12:56:59 -- accel/accel.sh@41 -- # jq -r . 00:12:55.578 [2024-04-17 12:56:59.526751] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
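The accel_xor run starting here XORs source buffers into a destination; with the default two sources the per-byte operation is simply dst[i] = a[i] ^ b[i]. A standalone sketch (hypothetical names):

    #include <stdint.h>
    #include <stddef.h>

    /* xor, two-source case. */
    static void xor2(uint8_t *dst, const uint8_t *a, const uint8_t *b, size_t len)
    {
        for (size_t i = 0; i < len; i++)
            dst[i] = a[i] ^ b[i];
    }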
00:12:55.578 [2024-04-17 12:56:59.527157] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113914 ] 00:12:55.578 [2024-04-17 12:56:59.696091] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.837 [2024-04-17 12:56:59.915268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.095 12:57:00 -- accel/accel.sh@20 -- # val= 00:12:56.095 12:57:00 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.095 12:57:00 -- accel/accel.sh@19 -- # IFS=: 00:12:56.095 12:57:00 -- accel/accel.sh@19 -- # read -r var val 00:12:56.095 12:57:00 -- accel/accel.sh@20 -- # val= 00:12:56.095 12:57:00 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.095 12:57:00 -- accel/accel.sh@19 -- # IFS=: 00:12:56.095 12:57:00 -- accel/accel.sh@19 -- # read -r var val 00:12:56.095 12:57:00 -- accel/accel.sh@20 -- # val=0x1 00:12:56.095 12:57:00 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.095 12:57:00 -- accel/accel.sh@19 -- # IFS=: 00:12:56.095 12:57:00 -- accel/accel.sh@19 -- # read -r var val 00:12:56.095 12:57:00 -- accel/accel.sh@20 -- # val= 00:12:56.095 12:57:00 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.095 12:57:00 -- accel/accel.sh@19 -- # IFS=: 00:12:56.095 12:57:00 -- accel/accel.sh@19 -- # read -r var val 00:12:56.095 12:57:00 -- accel/accel.sh@20 -- # val= 00:12:56.095 12:57:00 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.095 12:57:00 -- accel/accel.sh@19 -- # IFS=: 00:12:56.095 12:57:00 -- accel/accel.sh@19 -- # read -r var val 00:12:56.095 12:57:00 -- accel/accel.sh@20 -- # val=xor 00:12:56.095 12:57:00 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.095 12:57:00 -- accel/accel.sh@23 -- # accel_opc=xor 00:12:56.095 12:57:00 -- accel/accel.sh@19 -- # IFS=: 00:12:56.095 12:57:00 -- accel/accel.sh@19 -- # read -r var val 00:12:56.095 12:57:00 -- accel/accel.sh@20 -- # val=2 00:12:56.095 12:57:00 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.095 12:57:00 -- accel/accel.sh@19 -- # IFS=: 00:12:56.095 12:57:00 -- accel/accel.sh@19 -- # read -r var val 00:12:56.095 12:57:00 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:56.095 12:57:00 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.095 12:57:00 -- accel/accel.sh@19 -- # IFS=: 00:12:56.095 12:57:00 -- accel/accel.sh@19 -- # read -r var val 00:12:56.095 12:57:00 -- accel/accel.sh@20 -- # val= 00:12:56.095 12:57:00 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.095 12:57:00 -- accel/accel.sh@19 -- # IFS=: 00:12:56.095 12:57:00 -- accel/accel.sh@19 -- # read -r var val 00:12:56.095 12:57:00 -- accel/accel.sh@20 -- # val=software 00:12:56.095 12:57:00 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.095 12:57:00 -- accel/accel.sh@22 -- # accel_module=software 00:12:56.095 12:57:00 -- accel/accel.sh@19 -- # IFS=: 00:12:56.095 12:57:00 -- accel/accel.sh@19 -- # read -r var val 00:12:56.095 12:57:00 -- accel/accel.sh@20 -- # val=32 00:12:56.095 12:57:00 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.095 12:57:00 -- accel/accel.sh@19 -- # IFS=: 00:12:56.095 12:57:00 -- accel/accel.sh@19 -- # read -r var val 00:12:56.095 12:57:00 -- accel/accel.sh@20 -- # val=32 00:12:56.095 12:57:00 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.095 12:57:00 -- accel/accel.sh@19 -- # IFS=: 00:12:56.095 12:57:00 -- accel/accel.sh@19 -- # read -r var val 00:12:56.095 12:57:00 -- accel/accel.sh@20 -- # val=1 00:12:56.095 12:57:00 -- 
accel/accel.sh@21 -- # case "$var" in 00:12:56.095 12:57:00 -- accel/accel.sh@19 -- # IFS=: 00:12:56.095 12:57:00 -- accel/accel.sh@19 -- # read -r var val 00:12:56.095 12:57:00 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:56.095 12:57:00 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.095 12:57:00 -- accel/accel.sh@19 -- # IFS=: 00:12:56.095 12:57:00 -- accel/accel.sh@19 -- # read -r var val 00:12:56.095 12:57:00 -- accel/accel.sh@20 -- # val=Yes 00:12:56.095 12:57:00 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.095 12:57:00 -- accel/accel.sh@19 -- # IFS=: 00:12:56.095 12:57:00 -- accel/accel.sh@19 -- # read -r var val 00:12:56.095 12:57:00 -- accel/accel.sh@20 -- # val= 00:12:56.095 12:57:00 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.095 12:57:00 -- accel/accel.sh@19 -- # IFS=: 00:12:56.095 12:57:00 -- accel/accel.sh@19 -- # read -r var val 00:12:56.095 12:57:00 -- accel/accel.sh@20 -- # val= 00:12:56.095 12:57:00 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.095 12:57:00 -- accel/accel.sh@19 -- # IFS=: 00:12:56.095 12:57:00 -- accel/accel.sh@19 -- # read -r var val 00:12:58.026 12:57:01 -- accel/accel.sh@20 -- # val= 00:12:58.026 12:57:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.026 12:57:01 -- accel/accel.sh@19 -- # IFS=: 00:12:58.026 12:57:01 -- accel/accel.sh@19 -- # read -r var val 00:12:58.026 12:57:01 -- accel/accel.sh@20 -- # val= 00:12:58.026 12:57:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.026 12:57:01 -- accel/accel.sh@19 -- # IFS=: 00:12:58.026 12:57:01 -- accel/accel.sh@19 -- # read -r var val 00:12:58.026 12:57:01 -- accel/accel.sh@20 -- # val= 00:12:58.026 12:57:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.026 12:57:01 -- accel/accel.sh@19 -- # IFS=: 00:12:58.026 12:57:01 -- accel/accel.sh@19 -- # read -r var val 00:12:58.026 12:57:01 -- accel/accel.sh@20 -- # val= 00:12:58.026 12:57:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.026 12:57:01 -- accel/accel.sh@19 -- # IFS=: 00:12:58.026 12:57:01 -- accel/accel.sh@19 -- # read -r var val 00:12:58.026 12:57:01 -- accel/accel.sh@20 -- # val= 00:12:58.026 12:57:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.026 12:57:01 -- accel/accel.sh@19 -- # IFS=: 00:12:58.026 12:57:01 -- accel/accel.sh@19 -- # read -r var val 00:12:58.026 12:57:01 -- accel/accel.sh@20 -- # val= 00:12:58.026 12:57:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.026 12:57:01 -- accel/accel.sh@19 -- # IFS=: 00:12:58.026 12:57:01 -- accel/accel.sh@19 -- # read -r var val 00:12:58.026 12:57:01 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:58.026 12:57:01 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:12:58.026 12:57:01 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:58.026 00:12:58.026 real 0m2.498s 00:12:58.026 user 0m2.265s 00:12:58.026 sys 0m0.178s 00:12:58.026 12:57:01 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:12:58.026 12:57:01 -- common/autotest_common.sh@10 -- # set +x 00:12:58.026 ************************************ 00:12:58.026 END TEST accel_xor 00:12:58.026 ************************************ 00:12:58.026 12:57:02 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:12:58.026 12:57:02 -- common/autotest_common.sh@1075 -- # '[' 9 -le 1 ']' 00:12:58.026 12:57:02 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:58.026 12:57:02 -- common/autotest_common.sh@10 -- # set +x 00:12:58.026 ************************************ 00:12:58.026 START TEST accel_xor 00:12:58.026 ************************************ 00:12:58.026 
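The second xor run, launched with -x 3 below, generalizes the same operation to three sources; this N-way XOR is the parity computation that RAID-5-style layouts rely on. A sketch for an arbitrary source count (illustrative):

    #include <stdint.h>
    #include <stddef.h>

    /* xor, N-source case: dst accumulates the XOR of every source. */
    static void xor_n(uint8_t *dst, const uint8_t *const *srcs, int nsrc, size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            uint8_t acc = srcs[0][i];
            for (int s = 1; s < nsrc; s++)
                acc ^= srcs[s][i];
            dst[i] = acc;
        }
    }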
12:57:02 -- common/autotest_common.sh@1099 -- # accel_test -t 1 -w xor -y -x 3 00:12:58.026 12:57:02 -- accel/accel.sh@16 -- # local accel_opc 00:12:58.026 12:57:02 -- accel/accel.sh@17 -- # local accel_module 00:12:58.026 12:57:02 -- accel/accel.sh@19 -- # IFS=: 00:12:58.026 12:57:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:12:58.026 12:57:02 -- accel/accel.sh@19 -- # read -r var val 00:12:58.026 12:57:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:12:58.026 12:57:02 -- accel/accel.sh@12 -- # build_accel_config 00:12:58.026 12:57:02 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:58.026 12:57:02 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:58.026 12:57:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:58.026 12:57:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:58.026 12:57:02 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:58.026 12:57:02 -- accel/accel.sh@40 -- # local IFS=, 00:12:58.026 12:57:02 -- accel/accel.sh@41 -- # jq -r . 00:12:58.026 [2024-04-17 12:57:02.102005] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:12:58.026 [2024-04-17 12:57:02.102379] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113974 ] 00:12:58.284 [2024-04-17 12:57:02.272683] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.542 [2024-04-17 12:57:02.495130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.801 12:57:02 -- accel/accel.sh@20 -- # val= 00:12:58.801 12:57:02 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.801 12:57:02 -- accel/accel.sh@19 -- # IFS=: 00:12:58.801 12:57:02 -- accel/accel.sh@19 -- # read -r var val 00:12:58.801 12:57:02 -- accel/accel.sh@20 -- # val= 00:12:58.801 12:57:02 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.801 12:57:02 -- accel/accel.sh@19 -- # IFS=: 00:12:58.801 12:57:02 -- accel/accel.sh@19 -- # read -r var val 00:12:58.801 12:57:02 -- accel/accel.sh@20 -- # val=0x1 00:12:58.801 12:57:02 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.801 12:57:02 -- accel/accel.sh@19 -- # IFS=: 00:12:58.801 12:57:02 -- accel/accel.sh@19 -- # read -r var val 00:12:58.801 12:57:02 -- accel/accel.sh@20 -- # val= 00:12:58.801 12:57:02 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.801 12:57:02 -- accel/accel.sh@19 -- # IFS=: 00:12:58.801 12:57:02 -- accel/accel.sh@19 -- # read -r var val 00:12:58.801 12:57:02 -- accel/accel.sh@20 -- # val= 00:12:58.801 12:57:02 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.801 12:57:02 -- accel/accel.sh@19 -- # IFS=: 00:12:58.801 12:57:02 -- accel/accel.sh@19 -- # read -r var val 00:12:58.801 12:57:02 -- accel/accel.sh@20 -- # val=xor 00:12:58.801 12:57:02 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.801 12:57:02 -- accel/accel.sh@23 -- # accel_opc=xor 00:12:58.801 12:57:02 -- accel/accel.sh@19 -- # IFS=: 00:12:58.801 12:57:02 -- accel/accel.sh@19 -- # read -r var val 00:12:58.801 12:57:02 -- accel/accel.sh@20 -- # val=3 00:12:58.801 12:57:02 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.801 12:57:02 -- accel/accel.sh@19 -- # IFS=: 00:12:58.801 12:57:02 -- accel/accel.sh@19 -- # read -r var val 00:12:58.801 12:57:02 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:58.801 12:57:02 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.801 12:57:02 -- accel/accel.sh@19 -- # IFS=: 
00:12:58.801 12:57:02 -- accel/accel.sh@19 -- # read -r var val 00:12:58.801 12:57:02 -- accel/accel.sh@20 -- # val= 00:12:58.801 12:57:02 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.801 12:57:02 -- accel/accel.sh@19 -- # IFS=: 00:12:58.801 12:57:02 -- accel/accel.sh@19 -- # read -r var val 00:12:58.801 12:57:02 -- accel/accel.sh@20 -- # val=software 00:12:58.801 12:57:02 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.801 12:57:02 -- accel/accel.sh@22 -- # accel_module=software 00:12:58.801 12:57:02 -- accel/accel.sh@19 -- # IFS=: 00:12:58.801 12:57:02 -- accel/accel.sh@19 -- # read -r var val 00:12:58.801 12:57:02 -- accel/accel.sh@20 -- # val=32 00:12:58.801 12:57:02 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.801 12:57:02 -- accel/accel.sh@19 -- # IFS=: 00:12:58.801 12:57:02 -- accel/accel.sh@19 -- # read -r var val 00:12:58.801 12:57:02 -- accel/accel.sh@20 -- # val=32 00:12:58.801 12:57:02 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.801 12:57:02 -- accel/accel.sh@19 -- # IFS=: 00:12:58.801 12:57:02 -- accel/accel.sh@19 -- # read -r var val 00:12:58.801 12:57:02 -- accel/accel.sh@20 -- # val=1 00:12:58.801 12:57:02 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.801 12:57:02 -- accel/accel.sh@19 -- # IFS=: 00:12:58.801 12:57:02 -- accel/accel.sh@19 -- # read -r var val 00:12:58.801 12:57:02 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:58.801 12:57:02 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.801 12:57:02 -- accel/accel.sh@19 -- # IFS=: 00:12:58.801 12:57:02 -- accel/accel.sh@19 -- # read -r var val 00:12:58.801 12:57:02 -- accel/accel.sh@20 -- # val=Yes 00:12:58.801 12:57:02 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.801 12:57:02 -- accel/accel.sh@19 -- # IFS=: 00:12:58.801 12:57:02 -- accel/accel.sh@19 -- # read -r var val 00:12:58.801 12:57:02 -- accel/accel.sh@20 -- # val= 00:12:58.801 12:57:02 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.801 12:57:02 -- accel/accel.sh@19 -- # IFS=: 00:12:58.801 12:57:02 -- accel/accel.sh@19 -- # read -r var val 00:12:58.801 12:57:02 -- accel/accel.sh@20 -- # val= 00:12:58.801 12:57:02 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.801 12:57:02 -- accel/accel.sh@19 -- # IFS=: 00:12:58.801 12:57:02 -- accel/accel.sh@19 -- # read -r var val 00:13:00.783 12:57:04 -- accel/accel.sh@20 -- # val= 00:13:00.783 12:57:04 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.783 12:57:04 -- accel/accel.sh@19 -- # IFS=: 00:13:00.783 12:57:04 -- accel/accel.sh@19 -- # read -r var val 00:13:00.783 12:57:04 -- accel/accel.sh@20 -- # val= 00:13:00.783 12:57:04 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.783 12:57:04 -- accel/accel.sh@19 -- # IFS=: 00:13:00.783 12:57:04 -- accel/accel.sh@19 -- # read -r var val 00:13:00.783 12:57:04 -- accel/accel.sh@20 -- # val= 00:13:00.783 12:57:04 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.783 12:57:04 -- accel/accel.sh@19 -- # IFS=: 00:13:00.783 12:57:04 -- accel/accel.sh@19 -- # read -r var val 00:13:00.783 12:57:04 -- accel/accel.sh@20 -- # val= 00:13:00.783 12:57:04 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.783 12:57:04 -- accel/accel.sh@19 -- # IFS=: 00:13:00.783 12:57:04 -- accel/accel.sh@19 -- # read -r var val 00:13:00.783 12:57:04 -- accel/accel.sh@20 -- # val= 00:13:00.783 12:57:04 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.783 12:57:04 -- accel/accel.sh@19 -- # IFS=: 00:13:00.783 12:57:04 -- accel/accel.sh@19 -- # read -r var val 00:13:00.783 12:57:04 -- accel/accel.sh@20 -- # val= 00:13:00.783 12:57:04 -- accel/accel.sh@21 -- # case "$var" in 
00:13:00.783 12:57:04 -- accel/accel.sh@19 -- # IFS=: 00:13:00.783 12:57:04 -- accel/accel.sh@19 -- # read -r var val 00:13:00.783 ************************************ 00:13:00.783 END TEST accel_xor 00:13:00.783 ************************************ 00:13:00.783 12:57:04 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:00.783 12:57:04 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:13:00.783 12:57:04 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:00.783 00:13:00.783 real 0m2.492s 00:13:00.783 user 0m2.236s 00:13:00.783 sys 0m0.179s 00:13:00.783 12:57:04 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:13:00.783 12:57:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.783 12:57:04 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:13:00.783 12:57:04 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:13:00.783 12:57:04 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:00.783 12:57:04 -- common/autotest_common.sh@10 -- # set +x 00:13:00.783 ************************************ 00:13:00.783 START TEST accel_dif_verify 00:13:00.783 ************************************ 00:13:00.783 12:57:04 -- common/autotest_common.sh@1099 -- # accel_test -t 1 -w dif_verify 00:13:00.783 12:57:04 -- accel/accel.sh@16 -- # local accel_opc 00:13:00.783 12:57:04 -- accel/accel.sh@17 -- # local accel_module 00:13:00.783 12:57:04 -- accel/accel.sh@19 -- # IFS=: 00:13:00.783 12:57:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:13:00.783 12:57:04 -- accel/accel.sh@19 -- # read -r var val 00:13:00.783 12:57:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:13:00.783 12:57:04 -- accel/accel.sh@12 -- # build_accel_config 00:13:00.783 12:57:04 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:00.783 12:57:04 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:00.783 12:57:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:00.783 12:57:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:00.783 12:57:04 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:00.783 12:57:04 -- accel/accel.sh@40 -- # local IFS=, 00:13:00.783 12:57:04 -- accel/accel.sh@41 -- # jq -r . 00:13:00.783 [2024-04-17 12:57:04.663105] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
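dif_verify, starting here, checks T10 DIF protection information: each protected block carries an 8-byte tuple holding a 2-byte CRC16 guard, a 2-byte application tag, and a 4-byte reference tag. The sketch below recomputes and checks only the guard, and keeps the tuples in a side array rather than interleaved after each block as real DIF formats them, purely to stay short (illustrative, not SPDK's dif library):

    #include <stdint.h>
    #include <stddef.h>

    /* CRC16 T10-DIF, polynomial 0x8BB7, MSB-first, init 0. */
    static uint16_t crc16_t10dif(const uint8_t *buf, size_t len)
    {
        uint16_t crc = 0;
        while (len--) {
            crc ^= (uint16_t)(*buf++) << 8;
            for (int k = 0; k < 8; k++)
                crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                     : (uint16_t)(crc << 1);
        }
        return crc;
    }

    /* One DIF tuple per protected block (kept host-order here). */
    struct dif_tuple {
        uint16_t guard;
        uint16_t app_tag;
        uint32_t ref_tag;
    };

    /* dif_verify, guard check only: recompute the CRC over each block
     * and compare it against the stored guard. */
    static int dif_verify_guard(const uint8_t *blocks, size_t block_size,
                                size_t nblocks, const struct dif_tuple *tuples)
    {
        for (size_t i = 0; i < nblocks; i++)
            if (crc16_t10dif(blocks + i * block_size, block_size) != tuples[i].guard)
                return -1;  /* guard mismatch */
        return 0;
    }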
00:13:00.783 [2024-04-17 12:57:04.663461] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114035 ] 00:13:00.783 [2024-04-17 12:57:04.833098] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.050 [2024-04-17 12:57:05.052307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.321 12:57:05 -- accel/accel.sh@20 -- # val= 00:13:01.321 12:57:05 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.321 12:57:05 -- accel/accel.sh@19 -- # IFS=: 00:13:01.321 12:57:05 -- accel/accel.sh@19 -- # read -r var val 00:13:01.321 12:57:05 -- accel/accel.sh@20 -- # val= 00:13:01.321 12:57:05 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.321 12:57:05 -- accel/accel.sh@19 -- # IFS=: 00:13:01.321 12:57:05 -- accel/accel.sh@19 -- # read -r var val 00:13:01.321 12:57:05 -- accel/accel.sh@20 -- # val=0x1 00:13:01.321 12:57:05 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.321 12:57:05 -- accel/accel.sh@19 -- # IFS=: 00:13:01.321 12:57:05 -- accel/accel.sh@19 -- # read -r var val 00:13:01.321 12:57:05 -- accel/accel.sh@20 -- # val= 00:13:01.321 12:57:05 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.321 12:57:05 -- accel/accel.sh@19 -- # IFS=: 00:13:01.321 12:57:05 -- accel/accel.sh@19 -- # read -r var val 00:13:01.321 12:57:05 -- accel/accel.sh@20 -- # val= 00:13:01.321 12:57:05 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.321 12:57:05 -- accel/accel.sh@19 -- # IFS=: 00:13:01.321 12:57:05 -- accel/accel.sh@19 -- # read -r var val 00:13:01.321 12:57:05 -- accel/accel.sh@20 -- # val=dif_verify 00:13:01.321 12:57:05 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.321 12:57:05 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:13:01.321 12:57:05 -- accel/accel.sh@19 -- # IFS=: 00:13:01.321 12:57:05 -- accel/accel.sh@19 -- # read -r var val 00:13:01.321 12:57:05 -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:01.321 12:57:05 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.321 12:57:05 -- accel/accel.sh@19 -- # IFS=: 00:13:01.321 12:57:05 -- accel/accel.sh@19 -- # read -r var val 00:13:01.321 12:57:05 -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:01.321 12:57:05 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.321 12:57:05 -- accel/accel.sh@19 -- # IFS=: 00:13:01.321 12:57:05 -- accel/accel.sh@19 -- # read -r var val 00:13:01.321 12:57:05 -- accel/accel.sh@20 -- # val='512 bytes' 00:13:01.321 12:57:05 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.321 12:57:05 -- accel/accel.sh@19 -- # IFS=: 00:13:01.321 12:57:05 -- accel/accel.sh@19 -- # read -r var val 00:13:01.321 12:57:05 -- accel/accel.sh@20 -- # val='8 bytes' 00:13:01.321 12:57:05 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.321 12:57:05 -- accel/accel.sh@19 -- # IFS=: 00:13:01.321 12:57:05 -- accel/accel.sh@19 -- # read -r var val 00:13:01.321 12:57:05 -- accel/accel.sh@20 -- # val= 00:13:01.321 12:57:05 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.321 12:57:05 -- accel/accel.sh@19 -- # IFS=: 00:13:01.321 12:57:05 -- accel/accel.sh@19 -- # read -r var val 00:13:01.321 12:57:05 -- accel/accel.sh@20 -- # val=software 00:13:01.321 12:57:05 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.321 12:57:05 -- accel/accel.sh@22 -- # accel_module=software 00:13:01.321 12:57:05 -- accel/accel.sh@19 -- # IFS=: 00:13:01.321 12:57:05 -- accel/accel.sh@19 -- # read -r var val 00:13:01.321 12:57:05 -- 
accel/accel.sh@20 -- # val=32 00:13:01.321 12:57:05 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.321 12:57:05 -- accel/accel.sh@19 -- # IFS=: 00:13:01.321 12:57:05 -- accel/accel.sh@19 -- # read -r var val 00:13:01.321 12:57:05 -- accel/accel.sh@20 -- # val=32 00:13:01.321 12:57:05 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.321 12:57:05 -- accel/accel.sh@19 -- # IFS=: 00:13:01.321 12:57:05 -- accel/accel.sh@19 -- # read -r var val 00:13:01.321 12:57:05 -- accel/accel.sh@20 -- # val=1 00:13:01.321 12:57:05 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.321 12:57:05 -- accel/accel.sh@19 -- # IFS=: 00:13:01.321 12:57:05 -- accel/accel.sh@19 -- # read -r var val 00:13:01.321 12:57:05 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:01.321 12:57:05 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.321 12:57:05 -- accel/accel.sh@19 -- # IFS=: 00:13:01.321 12:57:05 -- accel/accel.sh@19 -- # read -r var val 00:13:01.321 12:57:05 -- accel/accel.sh@20 -- # val=No 00:13:01.321 12:57:05 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.321 12:57:05 -- accel/accel.sh@19 -- # IFS=: 00:13:01.321 12:57:05 -- accel/accel.sh@19 -- # read -r var val 00:13:01.321 12:57:05 -- accel/accel.sh@20 -- # val= 00:13:01.321 12:57:05 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.321 12:57:05 -- accel/accel.sh@19 -- # IFS=: 00:13:01.321 12:57:05 -- accel/accel.sh@19 -- # read -r var val 00:13:01.321 12:57:05 -- accel/accel.sh@20 -- # val= 00:13:01.321 12:57:05 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.321 12:57:05 -- accel/accel.sh@19 -- # IFS=: 00:13:01.321 12:57:05 -- accel/accel.sh@19 -- # read -r var val 00:13:03.269 12:57:07 -- accel/accel.sh@20 -- # val= 00:13:03.269 12:57:07 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.269 12:57:07 -- accel/accel.sh@19 -- # IFS=: 00:13:03.269 12:57:07 -- accel/accel.sh@19 -- # read -r var val 00:13:03.269 12:57:07 -- accel/accel.sh@20 -- # val= 00:13:03.269 12:57:07 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.269 12:57:07 -- accel/accel.sh@19 -- # IFS=: 00:13:03.269 12:57:07 -- accel/accel.sh@19 -- # read -r var val 00:13:03.269 12:57:07 -- accel/accel.sh@20 -- # val= 00:13:03.269 12:57:07 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.269 12:57:07 -- accel/accel.sh@19 -- # IFS=: 00:13:03.269 12:57:07 -- accel/accel.sh@19 -- # read -r var val 00:13:03.269 12:57:07 -- accel/accel.sh@20 -- # val= 00:13:03.269 12:57:07 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.269 12:57:07 -- accel/accel.sh@19 -- # IFS=: 00:13:03.269 12:57:07 -- accel/accel.sh@19 -- # read -r var val 00:13:03.269 12:57:07 -- accel/accel.sh@20 -- # val= 00:13:03.269 12:57:07 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.269 12:57:07 -- accel/accel.sh@19 -- # IFS=: 00:13:03.269 12:57:07 -- accel/accel.sh@19 -- # read -r var val 00:13:03.269 12:57:07 -- accel/accel.sh@20 -- # val= 00:13:03.269 12:57:07 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.269 12:57:07 -- accel/accel.sh@19 -- # IFS=: 00:13:03.269 12:57:07 -- accel/accel.sh@19 -- # read -r var val 00:13:03.269 ************************************ 00:13:03.269 END TEST accel_dif_verify 00:13:03.269 ************************************ 00:13:03.269 12:57:07 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:03.269 12:57:07 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:13:03.269 12:57:07 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:03.269 00:13:03.269 real 0m2.478s 00:13:03.269 user 0m2.228s 00:13:03.269 sys 0m0.187s 00:13:03.269 12:57:07 -- common/autotest_common.sh@1100 -- # 
xtrace_disable 00:13:03.269 12:57:07 -- common/autotest_common.sh@10 -- # set +x 00:13:03.269 12:57:07 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:13:03.269 12:57:07 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:13:03.269 12:57:07 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:03.269 12:57:07 -- common/autotest_common.sh@10 -- # set +x 00:13:03.269 ************************************ 00:13:03.269 START TEST accel_dif_generate 00:13:03.269 ************************************ 00:13:03.269 12:57:07 -- common/autotest_common.sh@1099 -- # accel_test -t 1 -w dif_generate 00:13:03.269 12:57:07 -- accel/accel.sh@16 -- # local accel_opc 00:13:03.269 12:57:07 -- accel/accel.sh@17 -- # local accel_module 00:13:03.269 12:57:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:13:03.269 12:57:07 -- accel/accel.sh@19 -- # IFS=: 00:13:03.269 12:57:07 -- accel/accel.sh@19 -- # read -r var val 00:13:03.269 12:57:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:13:03.269 12:57:07 -- accel/accel.sh@12 -- # build_accel_config 00:13:03.269 12:57:07 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:03.269 12:57:07 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:03.269 12:57:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:03.269 12:57:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:03.269 12:57:07 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:03.269 12:57:07 -- accel/accel.sh@40 -- # local IFS=, 00:13:03.269 12:57:07 -- accel/accel.sh@41 -- # jq -r . 00:13:03.269 [2024-04-17 12:57:07.224884] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:13:03.269 [2024-04-17 12:57:07.225289] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114090 ] 00:13:03.269 [2024-04-17 12:57:07.389378] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.528 [2024-04-17 12:57:07.618950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.787 12:57:07 -- accel/accel.sh@20 -- # val= 00:13:03.787 12:57:07 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.787 12:57:07 -- accel/accel.sh@19 -- # IFS=: 00:13:03.787 12:57:07 -- accel/accel.sh@19 -- # read -r var val 00:13:03.787 12:57:07 -- accel/accel.sh@20 -- # val= 00:13:03.787 12:57:07 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.787 12:57:07 -- accel/accel.sh@19 -- # IFS=: 00:13:03.787 12:57:07 -- accel/accel.sh@19 -- # read -r var val 00:13:03.787 12:57:07 -- accel/accel.sh@20 -- # val=0x1 00:13:03.787 12:57:07 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.787 12:57:07 -- accel/accel.sh@19 -- # IFS=: 00:13:03.787 12:57:07 -- accel/accel.sh@19 -- # read -r var val 00:13:03.787 12:57:07 -- accel/accel.sh@20 -- # val= 00:13:03.787 12:57:07 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.787 12:57:07 -- accel/accel.sh@19 -- # IFS=: 00:13:03.787 12:57:07 -- accel/accel.sh@19 -- # read -r var val 00:13:03.787 12:57:07 -- accel/accel.sh@20 -- # val= 00:13:03.787 12:57:07 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.787 12:57:07 -- accel/accel.sh@19 -- # IFS=: 00:13:03.787 12:57:07 -- accel/accel.sh@19 -- # read -r var val 00:13:03.787 12:57:07 -- accel/accel.sh@20 -- # val=dif_generate 00:13:03.787 12:57:07 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.787 12:57:07 -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:13:03.787 12:57:07 -- accel/accel.sh@19 -- # IFS=: 00:13:03.787 12:57:07 -- accel/accel.sh@19 -- # read -r var val 00:13:03.787 12:57:07 -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:03.787 12:57:07 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.787 12:57:07 -- accel/accel.sh@19 -- # IFS=: 00:13:03.787 12:57:07 -- accel/accel.sh@19 -- # read -r var val 00:13:03.787 12:57:07 -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:03.787 12:57:07 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.787 12:57:07 -- accel/accel.sh@19 -- # IFS=: 00:13:03.787 12:57:07 -- accel/accel.sh@19 -- # read -r var val 00:13:03.787 12:57:07 -- accel/accel.sh@20 -- # val='512 bytes' 00:13:03.787 12:57:07 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.787 12:57:07 -- accel/accel.sh@19 -- # IFS=: 00:13:03.787 12:57:07 -- accel/accel.sh@19 -- # read -r var val 00:13:03.787 12:57:07 -- accel/accel.sh@20 -- # val='8 bytes' 00:13:03.787 12:57:07 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.787 12:57:07 -- accel/accel.sh@19 -- # IFS=: 00:13:03.787 12:57:07 -- accel/accel.sh@19 -- # read -r var val 00:13:03.787 12:57:07 -- accel/accel.sh@20 -- # val= 00:13:03.787 12:57:07 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.787 12:57:07 -- accel/accel.sh@19 -- # IFS=: 00:13:03.787 12:57:07 -- accel/accel.sh@19 -- # read -r var val 00:13:03.787 12:57:07 -- accel/accel.sh@20 -- # val=software 00:13:03.787 12:57:07 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.787 12:57:07 -- accel/accel.sh@22 -- # accel_module=software 00:13:03.787 12:57:07 -- accel/accel.sh@19 -- # IFS=: 00:13:03.787 12:57:07 -- accel/accel.sh@19 -- # read -r var val 00:13:03.787 12:57:07 -- accel/accel.sh@20 -- # val=32 00:13:03.787 12:57:07 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.787 12:57:07 -- accel/accel.sh@19 -- # IFS=: 00:13:03.787 12:57:07 -- accel/accel.sh@19 -- # read -r var val 00:13:03.787 12:57:07 -- accel/accel.sh@20 -- # val=32 00:13:03.787 12:57:07 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.787 12:57:07 -- accel/accel.sh@19 -- # IFS=: 00:13:03.787 12:57:07 -- accel/accel.sh@19 -- # read -r var val 00:13:03.787 12:57:07 -- accel/accel.sh@20 -- # val=1 00:13:03.787 12:57:07 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.787 12:57:07 -- accel/accel.sh@19 -- # IFS=: 00:13:03.787 12:57:07 -- accel/accel.sh@19 -- # read -r var val 00:13:03.787 12:57:07 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:03.787 12:57:07 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.787 12:57:07 -- accel/accel.sh@19 -- # IFS=: 00:13:03.787 12:57:07 -- accel/accel.sh@19 -- # read -r var val 00:13:03.787 12:57:07 -- accel/accel.sh@20 -- # val=No 00:13:03.787 12:57:07 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.787 12:57:07 -- accel/accel.sh@19 -- # IFS=: 00:13:03.787 12:57:07 -- accel/accel.sh@19 -- # read -r var val 00:13:03.787 12:57:07 -- accel/accel.sh@20 -- # val= 00:13:03.787 12:57:07 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.787 12:57:07 -- accel/accel.sh@19 -- # IFS=: 00:13:03.787 12:57:07 -- accel/accel.sh@19 -- # read -r var val 00:13:03.787 12:57:07 -- accel/accel.sh@20 -- # val= 00:13:03.787 12:57:07 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.787 12:57:07 -- accel/accel.sh@19 -- # IFS=: 00:13:03.787 12:57:07 -- accel/accel.sh@19 -- # read -r var val 00:13:05.688 12:57:09 -- accel/accel.sh@20 -- # val= 00:13:05.688 12:57:09 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.688 12:57:09 -- accel/accel.sh@19 -- # IFS=: 00:13:05.688 12:57:09 -- 
accel/accel.sh@19 -- # read -r var val 00:13:05.688 12:57:09 -- accel/accel.sh@20 -- # val= 00:13:05.688 12:57:09 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.688 12:57:09 -- accel/accel.sh@19 -- # IFS=: 00:13:05.688 12:57:09 -- accel/accel.sh@19 -- # read -r var val 00:13:05.688 12:57:09 -- accel/accel.sh@20 -- # val= 00:13:05.688 12:57:09 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.688 12:57:09 -- accel/accel.sh@19 -- # IFS=: 00:13:05.688 12:57:09 -- accel/accel.sh@19 -- # read -r var val 00:13:05.688 12:57:09 -- accel/accel.sh@20 -- # val= 00:13:05.688 12:57:09 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.688 12:57:09 -- accel/accel.sh@19 -- # IFS=: 00:13:05.688 12:57:09 -- accel/accel.sh@19 -- # read -r var val 00:13:05.688 12:57:09 -- accel/accel.sh@20 -- # val= 00:13:05.688 12:57:09 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.688 12:57:09 -- accel/accel.sh@19 -- # IFS=: 00:13:05.688 12:57:09 -- accel/accel.sh@19 -- # read -r var val 00:13:05.688 12:57:09 -- accel/accel.sh@20 -- # val= 00:13:05.688 12:57:09 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.688 12:57:09 -- accel/accel.sh@19 -- # IFS=: 00:13:05.688 12:57:09 -- accel/accel.sh@19 -- # read -r var val 00:13:05.688 ************************************ 00:13:05.688 END TEST accel_dif_generate 00:13:05.688 ************************************ 00:13:05.688 12:57:09 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:05.688 12:57:09 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:13:05.688 12:57:09 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:05.688 00:13:05.688 real 0m2.493s 00:13:05.688 user 0m2.249s 00:13:05.688 sys 0m0.170s 00:13:05.688 12:57:09 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:13:05.688 12:57:09 -- common/autotest_common.sh@10 -- # set +x 00:13:05.688 12:57:09 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:13:05.688 12:57:09 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:13:05.688 12:57:09 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:05.688 12:57:09 -- common/autotest_common.sh@10 -- # set +x 00:13:05.688 ************************************ 00:13:05.688 START TEST accel_dif_generate_copy 00:13:05.688 ************************************ 00:13:05.688 12:57:09 -- common/autotest_common.sh@1099 -- # accel_test -t 1 -w dif_generate_copy 00:13:05.688 12:57:09 -- accel/accel.sh@16 -- # local accel_opc 00:13:05.688 12:57:09 -- accel/accel.sh@17 -- # local accel_module 00:13:05.688 12:57:09 -- accel/accel.sh@19 -- # IFS=: 00:13:05.688 12:57:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:13:05.688 12:57:09 -- accel/accel.sh@19 -- # read -r var val 00:13:05.688 12:57:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:13:05.688 12:57:09 -- accel/accel.sh@12 -- # build_accel_config 00:13:05.688 12:57:09 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:05.688 12:57:09 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:05.688 12:57:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:05.688 12:57:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:05.688 12:57:09 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:05.688 12:57:09 -- accel/accel.sh@40 -- # local IFS=, 00:13:05.688 12:57:09 -- accel/accel.sh@41 -- # jq -r . 00:13:05.688 [2024-04-17 12:57:09.801622] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
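dif_generate and dif_generate_copy (the run starting here) are the write-side counterparts: emit a fresh protection tuple per block, with the _copy flavor also moving the data in the same pass. A sketch that leans on the crc16_t10dif helper and dif_tuple struct from the dif_verify sketch above (all names hypothetical):

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    uint16_t crc16_t10dif(const uint8_t *buf, size_t len);  /* as sketched earlier */
    struct dif_tuple { uint16_t guard; uint16_t app_tag; uint32_t ref_tag; };

    /* dif_generate_copy: copy each block and emit its protection tuple
     * in one pass, so the destination arrives already protected. */
    static void dif_generate_copy(uint8_t *dst, const uint8_t *src,
                                  size_t block_size, size_t nblocks,
                                  struct dif_tuple *tuples,
                                  uint16_t app_tag, uint32_t start_ref_tag)
    {
        for (size_t i = 0; i < nblocks; i++) {
            const uint8_t *blk = src + i * block_size;
            memcpy(dst + i * block_size, blk, block_size);
            tuples[i].guard   = crc16_t10dif(blk, block_size);
            tuples[i].app_tag = app_tag;
            tuples[i].ref_tag = start_ref_tag + (uint32_t)i;  /* tracks the LBA */
        }
    }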
00:13:05.688 [2024-04-17 12:57:09.802009] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114171 ] 00:13:05.960 [2024-04-17 12:57:09.972721] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.243 [2024-04-17 12:57:10.189720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.501 12:57:10 -- accel/accel.sh@20 -- # val= 00:13:06.501 12:57:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.501 12:57:10 -- accel/accel.sh@19 -- # IFS=: 00:13:06.501 12:57:10 -- accel/accel.sh@19 -- # read -r var val 00:13:06.501 12:57:10 -- accel/accel.sh@20 -- # val= 00:13:06.501 12:57:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.501 12:57:10 -- accel/accel.sh@19 -- # IFS=: 00:13:06.501 12:57:10 -- accel/accel.sh@19 -- # read -r var val 00:13:06.501 12:57:10 -- accel/accel.sh@20 -- # val=0x1 00:13:06.501 12:57:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.501 12:57:10 -- accel/accel.sh@19 -- # IFS=: 00:13:06.501 12:57:10 -- accel/accel.sh@19 -- # read -r var val 00:13:06.501 12:57:10 -- accel/accel.sh@20 -- # val= 00:13:06.501 12:57:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.501 12:57:10 -- accel/accel.sh@19 -- # IFS=: 00:13:06.501 12:57:10 -- accel/accel.sh@19 -- # read -r var val 00:13:06.501 12:57:10 -- accel/accel.sh@20 -- # val= 00:13:06.501 12:57:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.501 12:57:10 -- accel/accel.sh@19 -- # IFS=: 00:13:06.501 12:57:10 -- accel/accel.sh@19 -- # read -r var val 00:13:06.501 12:57:10 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:13:06.501 12:57:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.501 12:57:10 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:13:06.501 12:57:10 -- accel/accel.sh@19 -- # IFS=: 00:13:06.501 12:57:10 -- accel/accel.sh@19 -- # read -r var val 00:13:06.501 12:57:10 -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:06.501 12:57:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.501 12:57:10 -- accel/accel.sh@19 -- # IFS=: 00:13:06.501 12:57:10 -- accel/accel.sh@19 -- # read -r var val 00:13:06.501 12:57:10 -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:06.501 12:57:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.501 12:57:10 -- accel/accel.sh@19 -- # IFS=: 00:13:06.501 12:57:10 -- accel/accel.sh@19 -- # read -r var val 00:13:06.501 12:57:10 -- accel/accel.sh@20 -- # val= 00:13:06.501 12:57:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.501 12:57:10 -- accel/accel.sh@19 -- # IFS=: 00:13:06.501 12:57:10 -- accel/accel.sh@19 -- # read -r var val 00:13:06.501 12:57:10 -- accel/accel.sh@20 -- # val=software 00:13:06.501 12:57:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.501 12:57:10 -- accel/accel.sh@22 -- # accel_module=software 00:13:06.501 12:57:10 -- accel/accel.sh@19 -- # IFS=: 00:13:06.501 12:57:10 -- accel/accel.sh@19 -- # read -r var val 00:13:06.501 12:57:10 -- accel/accel.sh@20 -- # val=32 00:13:06.501 12:57:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.501 12:57:10 -- accel/accel.sh@19 -- # IFS=: 00:13:06.501 12:57:10 -- accel/accel.sh@19 -- # read -r var val 00:13:06.501 12:57:10 -- accel/accel.sh@20 -- # val=32 00:13:06.501 12:57:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.501 12:57:10 -- accel/accel.sh@19 -- # IFS=: 00:13:06.501 12:57:10 -- accel/accel.sh@19 -- # read -r var val 00:13:06.501 12:57:10 -- accel/accel.sh@20 
-- # val=1 00:13:06.501 12:57:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.501 12:57:10 -- accel/accel.sh@19 -- # IFS=: 00:13:06.502 12:57:10 -- accel/accel.sh@19 -- # read -r var val 00:13:06.502 12:57:10 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:06.502 12:57:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.502 12:57:10 -- accel/accel.sh@19 -- # IFS=: 00:13:06.502 12:57:10 -- accel/accel.sh@19 -- # read -r var val 00:13:06.502 12:57:10 -- accel/accel.sh@20 -- # val=No 00:13:06.502 12:57:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.502 12:57:10 -- accel/accel.sh@19 -- # IFS=: 00:13:06.502 12:57:10 -- accel/accel.sh@19 -- # read -r var val 00:13:06.502 12:57:10 -- accel/accel.sh@20 -- # val= 00:13:06.502 12:57:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.502 12:57:10 -- accel/accel.sh@19 -- # IFS=: 00:13:06.502 12:57:10 -- accel/accel.sh@19 -- # read -r var val 00:13:06.502 12:57:10 -- accel/accel.sh@20 -- # val= 00:13:06.502 12:57:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.502 12:57:10 -- accel/accel.sh@19 -- # IFS=: 00:13:06.502 12:57:10 -- accel/accel.sh@19 -- # read -r var val 00:13:08.411 12:57:12 -- accel/accel.sh@20 -- # val= 00:13:08.411 12:57:12 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.411 12:57:12 -- accel/accel.sh@19 -- # IFS=: 00:13:08.411 12:57:12 -- accel/accel.sh@19 -- # read -r var val 00:13:08.411 12:57:12 -- accel/accel.sh@20 -- # val= 00:13:08.411 12:57:12 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.411 12:57:12 -- accel/accel.sh@19 -- # IFS=: 00:13:08.411 12:57:12 -- accel/accel.sh@19 -- # read -r var val 00:13:08.411 12:57:12 -- accel/accel.sh@20 -- # val= 00:13:08.411 12:57:12 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.411 12:57:12 -- accel/accel.sh@19 -- # IFS=: 00:13:08.411 12:57:12 -- accel/accel.sh@19 -- # read -r var val 00:13:08.411 12:57:12 -- accel/accel.sh@20 -- # val= 00:13:08.411 12:57:12 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.411 12:57:12 -- accel/accel.sh@19 -- # IFS=: 00:13:08.411 12:57:12 -- accel/accel.sh@19 -- # read -r var val 00:13:08.411 12:57:12 -- accel/accel.sh@20 -- # val= 00:13:08.411 12:57:12 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.411 12:57:12 -- accel/accel.sh@19 -- # IFS=: 00:13:08.411 12:57:12 -- accel/accel.sh@19 -- # read -r var val 00:13:08.411 12:57:12 -- accel/accel.sh@20 -- # val= 00:13:08.411 12:57:12 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.411 12:57:12 -- accel/accel.sh@19 -- # IFS=: 00:13:08.411 12:57:12 -- accel/accel.sh@19 -- # read -r var val 00:13:08.411 ************************************ 00:13:08.411 END TEST accel_dif_generate_copy 00:13:08.411 ************************************ 00:13:08.411 12:57:12 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:08.411 12:57:12 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:13:08.411 12:57:12 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:08.411 00:13:08.411 real 0m2.496s 00:13:08.411 user 0m2.240s 00:13:08.411 sys 0m0.180s 00:13:08.411 12:57:12 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:13:08.411 12:57:12 -- common/autotest_common.sh@10 -- # set +x 00:13:08.411 12:57:12 -- accel/accel.sh@115 -- # [[ y == y ]] 00:13:08.411 12:57:12 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:08.411 12:57:12 -- common/autotest_common.sh@1075 -- # '[' 8 -le 1 ']' 00:13:08.411 12:57:12 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:08.411 12:57:12 -- 
common/autotest_common.sh@10 -- # set +x 00:13:08.411 ************************************ 00:13:08.411 START TEST accel_comp 00:13:08.411 ************************************ 00:13:08.411 12:57:12 -- common/autotest_common.sh@1099 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:08.411 12:57:12 -- accel/accel.sh@16 -- # local accel_opc 00:13:08.411 12:57:12 -- accel/accel.sh@17 -- # local accel_module 00:13:08.411 12:57:12 -- accel/accel.sh@19 -- # IFS=: 00:13:08.411 12:57:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:08.411 12:57:12 -- accel/accel.sh@19 -- # read -r var val 00:13:08.411 12:57:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:08.411 12:57:12 -- accel/accel.sh@12 -- # build_accel_config 00:13:08.411 12:57:12 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:08.411 12:57:12 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:08.411 12:57:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:08.411 12:57:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:08.411 12:57:12 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:08.411 12:57:12 -- accel/accel.sh@40 -- # local IFS=, 00:13:08.411 12:57:12 -- accel/accel.sh@41 -- # jq -r . 00:13:08.411 [2024-04-17 12:57:12.369231] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:13:08.411 [2024-04-17 12:57:12.369592] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114232 ] 00:13:08.411 [2024-04-17 12:57:12.538339] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.670 [2024-04-17 12:57:12.761708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.929 12:57:12 -- accel/accel.sh@20 -- # val= 00:13:08.929 12:57:12 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.929 12:57:12 -- accel/accel.sh@19 -- # IFS=: 00:13:08.929 12:57:12 -- accel/accel.sh@19 -- # read -r var val 00:13:08.929 12:57:12 -- accel/accel.sh@20 -- # val= 00:13:08.929 12:57:12 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.929 12:57:12 -- accel/accel.sh@19 -- # IFS=: 00:13:08.929 12:57:12 -- accel/accel.sh@19 -- # read -r var val 00:13:08.929 12:57:12 -- accel/accel.sh@20 -- # val= 00:13:08.929 12:57:12 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.929 12:57:12 -- accel/accel.sh@19 -- # IFS=: 00:13:08.929 12:57:12 -- accel/accel.sh@19 -- # read -r var val 00:13:08.929 12:57:12 -- accel/accel.sh@20 -- # val=0x1 00:13:08.929 12:57:12 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.929 12:57:12 -- accel/accel.sh@19 -- # IFS=: 00:13:08.929 12:57:12 -- accel/accel.sh@19 -- # read -r var val 00:13:08.929 12:57:12 -- accel/accel.sh@20 -- # val= 00:13:08.929 12:57:12 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.929 12:57:12 -- accel/accel.sh@19 -- # IFS=: 00:13:08.929 12:57:12 -- accel/accel.sh@19 -- # read -r var val 00:13:08.929 12:57:12 -- accel/accel.sh@20 -- # val= 00:13:08.929 12:57:12 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.929 12:57:12 -- accel/accel.sh@19 -- # IFS=: 00:13:08.929 12:57:12 -- accel/accel.sh@19 -- # read -r var val 00:13:08.929 12:57:12 -- accel/accel.sh@20 -- # val=compress 00:13:08.929 12:57:12 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.929 12:57:12 -- accel/accel.sh@23 
-- # accel_opc=compress 00:13:08.929 12:57:12 -- accel/accel.sh@19 -- # IFS=: 00:13:08.929 12:57:12 -- accel/accel.sh@19 -- # read -r var val 00:13:08.929 12:57:12 -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:08.929 12:57:12 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.929 12:57:12 -- accel/accel.sh@19 -- # IFS=: 00:13:08.929 12:57:12 -- accel/accel.sh@19 -- # read -r var val 00:13:08.929 12:57:12 -- accel/accel.sh@20 -- # val= 00:13:08.929 12:57:12 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.929 12:57:12 -- accel/accel.sh@19 -- # IFS=: 00:13:08.929 12:57:12 -- accel/accel.sh@19 -- # read -r var val 00:13:08.929 12:57:12 -- accel/accel.sh@20 -- # val=software 00:13:08.929 12:57:12 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.929 12:57:12 -- accel/accel.sh@22 -- # accel_module=software 00:13:08.929 12:57:12 -- accel/accel.sh@19 -- # IFS=: 00:13:08.929 12:57:12 -- accel/accel.sh@19 -- # read -r var val 00:13:08.929 12:57:12 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:08.929 12:57:12 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.929 12:57:12 -- accel/accel.sh@19 -- # IFS=: 00:13:08.929 12:57:12 -- accel/accel.sh@19 -- # read -r var val 00:13:08.929 12:57:12 -- accel/accel.sh@20 -- # val=32 00:13:08.929 12:57:12 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.929 12:57:12 -- accel/accel.sh@19 -- # IFS=: 00:13:08.929 12:57:12 -- accel/accel.sh@19 -- # read -r var val 00:13:08.929 12:57:12 -- accel/accel.sh@20 -- # val=32 00:13:08.929 12:57:12 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.929 12:57:12 -- accel/accel.sh@19 -- # IFS=: 00:13:08.929 12:57:12 -- accel/accel.sh@19 -- # read -r var val 00:13:08.929 12:57:12 -- accel/accel.sh@20 -- # val=1 00:13:08.929 12:57:12 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.929 12:57:12 -- accel/accel.sh@19 -- # IFS=: 00:13:08.929 12:57:12 -- accel/accel.sh@19 -- # read -r var val 00:13:08.929 12:57:12 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:08.929 12:57:12 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.929 12:57:12 -- accel/accel.sh@19 -- # IFS=: 00:13:08.929 12:57:12 -- accel/accel.sh@19 -- # read -r var val 00:13:08.929 12:57:12 -- accel/accel.sh@20 -- # val=No 00:13:08.929 12:57:12 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.929 12:57:12 -- accel/accel.sh@19 -- # IFS=: 00:13:08.929 12:57:12 -- accel/accel.sh@19 -- # read -r var val 00:13:08.929 12:57:12 -- accel/accel.sh@20 -- # val= 00:13:08.929 12:57:12 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.929 12:57:12 -- accel/accel.sh@19 -- # IFS=: 00:13:08.929 12:57:12 -- accel/accel.sh@19 -- # read -r var val 00:13:08.929 12:57:12 -- accel/accel.sh@20 -- # val= 00:13:08.929 12:57:12 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.929 12:57:12 -- accel/accel.sh@19 -- # IFS=: 00:13:08.929 12:57:12 -- accel/accel.sh@19 -- # read -r var val 00:13:10.829 12:57:14 -- accel/accel.sh@20 -- # val= 00:13:10.829 12:57:14 -- accel/accel.sh@21 -- # case "$var" in 00:13:10.829 12:57:14 -- accel/accel.sh@19 -- # IFS=: 00:13:10.829 12:57:14 -- accel/accel.sh@19 -- # read -r var val 00:13:10.829 12:57:14 -- accel/accel.sh@20 -- # val= 00:13:10.829 12:57:14 -- accel/accel.sh@21 -- # case "$var" in 00:13:10.829 12:57:14 -- accel/accel.sh@19 -- # IFS=: 00:13:10.829 12:57:14 -- accel/accel.sh@19 -- # read -r var val 00:13:10.829 12:57:14 -- accel/accel.sh@20 -- # val= 00:13:10.829 12:57:14 -- accel/accel.sh@21 -- # case "$var" in 00:13:10.829 12:57:14 -- accel/accel.sh@19 -- # IFS=: 00:13:10.829 12:57:14 -- accel/accel.sh@19 -- # 
read -r var val 00:13:10.829 12:57:14 -- accel/accel.sh@20 -- # val= 00:13:10.829 12:57:14 -- accel/accel.sh@21 -- # case "$var" in 00:13:10.829 12:57:14 -- accel/accel.sh@19 -- # IFS=: 00:13:10.829 12:57:14 -- accel/accel.sh@19 -- # read -r var val 00:13:10.829 12:57:14 -- accel/accel.sh@20 -- # val= 00:13:10.829 12:57:14 -- accel/accel.sh@21 -- # case "$var" in 00:13:10.829 12:57:14 -- accel/accel.sh@19 -- # IFS=: 00:13:10.829 12:57:14 -- accel/accel.sh@19 -- # read -r var val 00:13:10.829 12:57:14 -- accel/accel.sh@20 -- # val= 00:13:10.829 12:57:14 -- accel/accel.sh@21 -- # case "$var" in 00:13:10.829 12:57:14 -- accel/accel.sh@19 -- # IFS=: 00:13:10.829 12:57:14 -- accel/accel.sh@19 -- # read -r var val 00:13:10.829 ************************************ 00:13:10.829 END TEST accel_comp 00:13:10.829 ************************************ 00:13:10.829 12:57:14 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:10.829 12:57:14 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:13:10.829 12:57:14 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:10.829 00:13:10.829 real 0m2.514s 00:13:10.829 user 0m2.256s 00:13:10.829 sys 0m0.192s 00:13:10.829 12:57:14 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:13:10.829 12:57:14 -- common/autotest_common.sh@10 -- # set +x 00:13:10.829 12:57:14 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:10.829 12:57:14 -- common/autotest_common.sh@1075 -- # '[' 9 -le 1 ']' 00:13:10.829 12:57:14 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:10.829 12:57:14 -- common/autotest_common.sh@10 -- # set +x 00:13:10.829 ************************************ 00:13:10.829 START TEST accel_decomp 00:13:10.829 ************************************ 00:13:10.829 12:57:14 -- common/autotest_common.sh@1099 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:10.829 12:57:14 -- accel/accel.sh@16 -- # local accel_opc 00:13:10.829 12:57:14 -- accel/accel.sh@17 -- # local accel_module 00:13:10.829 12:57:14 -- accel/accel.sh@19 -- # IFS=: 00:13:10.829 12:57:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:10.829 12:57:14 -- accel/accel.sh@19 -- # read -r var val 00:13:10.829 12:57:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:10.829 12:57:14 -- accel/accel.sh@12 -- # build_accel_config 00:13:10.829 12:57:14 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:10.829 12:57:14 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:10.829 12:57:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:10.829 12:57:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:10.829 12:57:14 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:10.829 12:57:14 -- accel/accel.sh@40 -- # local IFS=, 00:13:10.829 12:57:14 -- accel/accel.sh@41 -- # jq -r . 00:13:10.829 [2024-04-17 12:57:14.962139] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
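The columns of '-- # val=...', 'case "$var" in', 'IFS=:' and 'read -r var val' lines filling this section are bash xtrace output from accel.sh: for each workload, accel_test walks the settings accel_perf reports and compares them key by key against the expected values (opcode, module, transfer size, run time), then asserts on them at the '@27' checks such as [[ -n software ]]. A minimal sketch of that loop, assuming a plain key:value report on stdin rather than the script's real plumbing:

    # not the verbatim accel.sh source, just the pattern the traces show:
    # split each line on ':' and remember the fields the test asserts on
    while IFS=: read -r var val; do
        case "$var" in
            opc)    accel_opc=$val ;;     # e.g. compress, decompress
            module) accel_module=$val ;;  # e.g. software
        esac
    done
    [[ -n $accel_module && -n $accel_opc ]]

Every run here reports module=software, which is why each block ends with the [[ software == \s\o\f\t\w\a\r\e ]] comparison.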
00:13:10.829 [2024-04-17 12:57:14.962488] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114294 ] 00:13:11.087 [2024-04-17 12:57:15.140242] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.345 [2024-04-17 12:57:15.370967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.620 12:57:15 -- accel/accel.sh@20 -- # val= 00:13:11.620 12:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.620 12:57:15 -- accel/accel.sh@19 -- # IFS=: 00:13:11.620 12:57:15 -- accel/accel.sh@19 -- # read -r var val 00:13:11.620 12:57:15 -- accel/accel.sh@20 -- # val= 00:13:11.620 12:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.620 12:57:15 -- accel/accel.sh@19 -- # IFS=: 00:13:11.620 12:57:15 -- accel/accel.sh@19 -- # read -r var val 00:13:11.620 12:57:15 -- accel/accel.sh@20 -- # val= 00:13:11.620 12:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.620 12:57:15 -- accel/accel.sh@19 -- # IFS=: 00:13:11.620 12:57:15 -- accel/accel.sh@19 -- # read -r var val 00:13:11.620 12:57:15 -- accel/accel.sh@20 -- # val=0x1 00:13:11.620 12:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.620 12:57:15 -- accel/accel.sh@19 -- # IFS=: 00:13:11.620 12:57:15 -- accel/accel.sh@19 -- # read -r var val 00:13:11.620 12:57:15 -- accel/accel.sh@20 -- # val= 00:13:11.620 12:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.620 12:57:15 -- accel/accel.sh@19 -- # IFS=: 00:13:11.620 12:57:15 -- accel/accel.sh@19 -- # read -r var val 00:13:11.620 12:57:15 -- accel/accel.sh@20 -- # val= 00:13:11.620 12:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.620 12:57:15 -- accel/accel.sh@19 -- # IFS=: 00:13:11.620 12:57:15 -- accel/accel.sh@19 -- # read -r var val 00:13:11.620 12:57:15 -- accel/accel.sh@20 -- # val=decompress 00:13:11.620 12:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.620 12:57:15 -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:11.620 12:57:15 -- accel/accel.sh@19 -- # IFS=: 00:13:11.620 12:57:15 -- accel/accel.sh@19 -- # read -r var val 00:13:11.620 12:57:15 -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:11.620 12:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.620 12:57:15 -- accel/accel.sh@19 -- # IFS=: 00:13:11.620 12:57:15 -- accel/accel.sh@19 -- # read -r var val 00:13:11.620 12:57:15 -- accel/accel.sh@20 -- # val= 00:13:11.620 12:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.620 12:57:15 -- accel/accel.sh@19 -- # IFS=: 00:13:11.620 12:57:15 -- accel/accel.sh@19 -- # read -r var val 00:13:11.620 12:57:15 -- accel/accel.sh@20 -- # val=software 00:13:11.620 12:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.620 12:57:15 -- accel/accel.sh@22 -- # accel_module=software 00:13:11.620 12:57:15 -- accel/accel.sh@19 -- # IFS=: 00:13:11.620 12:57:15 -- accel/accel.sh@19 -- # read -r var val 00:13:11.620 12:57:15 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:11.620 12:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.620 12:57:15 -- accel/accel.sh@19 -- # IFS=: 00:13:11.620 12:57:15 -- accel/accel.sh@19 -- # read -r var val 00:13:11.620 12:57:15 -- accel/accel.sh@20 -- # val=32 00:13:11.620 12:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.620 12:57:15 -- accel/accel.sh@19 -- # IFS=: 00:13:11.620 12:57:15 -- accel/accel.sh@19 -- # read -r var val 00:13:11.620 12:57:15 -- 
accel/accel.sh@20 -- # val=32 00:13:11.620 12:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.620 12:57:15 -- accel/accel.sh@19 -- # IFS=: 00:13:11.620 12:57:15 -- accel/accel.sh@19 -- # read -r var val 00:13:11.620 12:57:15 -- accel/accel.sh@20 -- # val=1 00:13:11.620 12:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.620 12:57:15 -- accel/accel.sh@19 -- # IFS=: 00:13:11.620 12:57:15 -- accel/accel.sh@19 -- # read -r var val 00:13:11.620 12:57:15 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:11.620 12:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.620 12:57:15 -- accel/accel.sh@19 -- # IFS=: 00:13:11.620 12:57:15 -- accel/accel.sh@19 -- # read -r var val 00:13:11.620 12:57:15 -- accel/accel.sh@20 -- # val=Yes 00:13:11.620 12:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.620 12:57:15 -- accel/accel.sh@19 -- # IFS=: 00:13:11.620 12:57:15 -- accel/accel.sh@19 -- # read -r var val 00:13:11.620 12:57:15 -- accel/accel.sh@20 -- # val= 00:13:11.620 12:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.620 12:57:15 -- accel/accel.sh@19 -- # IFS=: 00:13:11.620 12:57:15 -- accel/accel.sh@19 -- # read -r var val 00:13:11.620 12:57:15 -- accel/accel.sh@20 -- # val= 00:13:11.620 12:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.620 12:57:15 -- accel/accel.sh@19 -- # IFS=: 00:13:11.620 12:57:15 -- accel/accel.sh@19 -- # read -r var val 00:13:13.546 12:57:17 -- accel/accel.sh@20 -- # val= 00:13:13.546 12:57:17 -- accel/accel.sh@21 -- # case "$var" in 00:13:13.546 12:57:17 -- accel/accel.sh@19 -- # IFS=: 00:13:13.546 12:57:17 -- accel/accel.sh@19 -- # read -r var val 00:13:13.546 12:57:17 -- accel/accel.sh@20 -- # val= 00:13:13.546 12:57:17 -- accel/accel.sh@21 -- # case "$var" in 00:13:13.546 12:57:17 -- accel/accel.sh@19 -- # IFS=: 00:13:13.546 12:57:17 -- accel/accel.sh@19 -- # read -r var val 00:13:13.546 12:57:17 -- accel/accel.sh@20 -- # val= 00:13:13.546 12:57:17 -- accel/accel.sh@21 -- # case "$var" in 00:13:13.546 12:57:17 -- accel/accel.sh@19 -- # IFS=: 00:13:13.546 12:57:17 -- accel/accel.sh@19 -- # read -r var val 00:13:13.546 12:57:17 -- accel/accel.sh@20 -- # val= 00:13:13.546 12:57:17 -- accel/accel.sh@21 -- # case "$var" in 00:13:13.546 12:57:17 -- accel/accel.sh@19 -- # IFS=: 00:13:13.546 12:57:17 -- accel/accel.sh@19 -- # read -r var val 00:13:13.546 12:57:17 -- accel/accel.sh@20 -- # val= 00:13:13.546 12:57:17 -- accel/accel.sh@21 -- # case "$var" in 00:13:13.546 12:57:17 -- accel/accel.sh@19 -- # IFS=: 00:13:13.547 12:57:17 -- accel/accel.sh@19 -- # read -r var val 00:13:13.547 12:57:17 -- accel/accel.sh@20 -- # val= 00:13:13.547 12:57:17 -- accel/accel.sh@21 -- # case "$var" in 00:13:13.547 12:57:17 -- accel/accel.sh@19 -- # IFS=: 00:13:13.547 12:57:17 -- accel/accel.sh@19 -- # read -r var val 00:13:13.547 ************************************ 00:13:13.547 END TEST accel_decomp 00:13:13.547 ************************************ 00:13:13.547 12:57:17 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:13.547 12:57:17 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:13.547 12:57:17 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:13.547 00:13:13.547 real 0m2.589s 00:13:13.547 user 0m2.328s 00:13:13.547 sys 0m0.185s 00:13:13.547 12:57:17 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:13:13.547 12:57:17 -- common/autotest_common.sh@10 -- # set +x 00:13:13.547 12:57:17 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
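accel_decmop_full repeats the decompress workload with -o 0 appended to the accel_perf command line. The effect is visible in the traces: the earlier runs fed a '4096 bytes' transfer size, while the run below carries '111250 bytes', apparently the full size of the bib input file. To replay one of these runs by hand on the build VM, with the flags exactly as logged (only the -c config descriptor dropped):

    cd /home/vagrant/spdk_repo/spdk
    ./build/examples/accel_perf -t 1 -w decompress \
        -l test/accel/bib -y -o 0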
00:13:13.547 12:57:17 -- common/autotest_common.sh@1075 -- # '[' 11 -le 1 ']' 00:13:13.547 12:57:17 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:13.547 12:57:17 -- common/autotest_common.sh@10 -- # set +x 00:13:13.547 ************************************ 00:13:13.547 START TEST accel_decmop_full 00:13:13.547 ************************************ 00:13:13.547 12:57:17 -- common/autotest_common.sh@1099 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:13:13.547 12:57:17 -- accel/accel.sh@16 -- # local accel_opc 00:13:13.547 12:57:17 -- accel/accel.sh@17 -- # local accel_module 00:13:13.547 12:57:17 -- accel/accel.sh@19 -- # IFS=: 00:13:13.547 12:57:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:13:13.547 12:57:17 -- accel/accel.sh@19 -- # read -r var val 00:13:13.547 12:57:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:13:13.547 12:57:17 -- accel/accel.sh@12 -- # build_accel_config 00:13:13.547 12:57:17 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:13.547 12:57:17 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:13.547 12:57:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:13.547 12:57:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:13.547 12:57:17 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:13.547 12:57:17 -- accel/accel.sh@40 -- # local IFS=, 00:13:13.547 12:57:17 -- accel/accel.sh@41 -- # jq -r . 00:13:13.547 [2024-04-17 12:57:17.637608] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:13:13.547 [2024-04-17 12:57:17.637982] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114349 ] 00:13:13.805 [2024-04-17 12:57:17.810488] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.063 [2024-04-17 12:57:18.043292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.322 12:57:18 -- accel/accel.sh@20 -- # val= 00:13:14.322 12:57:18 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.322 12:57:18 -- accel/accel.sh@19 -- # IFS=: 00:13:14.322 12:57:18 -- accel/accel.sh@19 -- # read -r var val 00:13:14.322 12:57:18 -- accel/accel.sh@20 -- # val= 00:13:14.322 12:57:18 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.322 12:57:18 -- accel/accel.sh@19 -- # IFS=: 00:13:14.322 12:57:18 -- accel/accel.sh@19 -- # read -r var val 00:13:14.322 12:57:18 -- accel/accel.sh@20 -- # val= 00:13:14.322 12:57:18 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.322 12:57:18 -- accel/accel.sh@19 -- # IFS=: 00:13:14.322 12:57:18 -- accel/accel.sh@19 -- # read -r var val 00:13:14.322 12:57:18 -- accel/accel.sh@20 -- # val=0x1 00:13:14.322 12:57:18 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.322 12:57:18 -- accel/accel.sh@19 -- # IFS=: 00:13:14.322 12:57:18 -- accel/accel.sh@19 -- # read -r var val 00:13:14.322 12:57:18 -- accel/accel.sh@20 -- # val= 00:13:14.322 12:57:18 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.322 12:57:18 -- accel/accel.sh@19 -- # IFS=: 00:13:14.322 12:57:18 -- accel/accel.sh@19 -- # read -r var val 00:13:14.322 12:57:18 -- accel/accel.sh@20 -- # val= 00:13:14.322 12:57:18 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.322 12:57:18 -- accel/accel.sh@19 -- # IFS=: 00:13:14.322 
12:57:18 -- accel/accel.sh@19 -- # read -r var val 00:13:14.322 12:57:18 -- accel/accel.sh@20 -- # val=decompress 00:13:14.322 12:57:18 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.322 12:57:18 -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:14.322 12:57:18 -- accel/accel.sh@19 -- # IFS=: 00:13:14.322 12:57:18 -- accel/accel.sh@19 -- # read -r var val 00:13:14.322 12:57:18 -- accel/accel.sh@20 -- # val='111250 bytes' 00:13:14.322 12:57:18 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.322 12:57:18 -- accel/accel.sh@19 -- # IFS=: 00:13:14.322 12:57:18 -- accel/accel.sh@19 -- # read -r var val 00:13:14.322 12:57:18 -- accel/accel.sh@20 -- # val= 00:13:14.322 12:57:18 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.322 12:57:18 -- accel/accel.sh@19 -- # IFS=: 00:13:14.322 12:57:18 -- accel/accel.sh@19 -- # read -r var val 00:13:14.322 12:57:18 -- accel/accel.sh@20 -- # val=software 00:13:14.322 12:57:18 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.322 12:57:18 -- accel/accel.sh@22 -- # accel_module=software 00:13:14.322 12:57:18 -- accel/accel.sh@19 -- # IFS=: 00:13:14.322 12:57:18 -- accel/accel.sh@19 -- # read -r var val 00:13:14.322 12:57:18 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:14.322 12:57:18 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.322 12:57:18 -- accel/accel.sh@19 -- # IFS=: 00:13:14.322 12:57:18 -- accel/accel.sh@19 -- # read -r var val 00:13:14.322 12:57:18 -- accel/accel.sh@20 -- # val=32 00:13:14.322 12:57:18 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.322 12:57:18 -- accel/accel.sh@19 -- # IFS=: 00:13:14.322 12:57:18 -- accel/accel.sh@19 -- # read -r var val 00:13:14.322 12:57:18 -- accel/accel.sh@20 -- # val=32 00:13:14.322 12:57:18 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.322 12:57:18 -- accel/accel.sh@19 -- # IFS=: 00:13:14.322 12:57:18 -- accel/accel.sh@19 -- # read -r var val 00:13:14.322 12:57:18 -- accel/accel.sh@20 -- # val=1 00:13:14.322 12:57:18 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.322 12:57:18 -- accel/accel.sh@19 -- # IFS=: 00:13:14.322 12:57:18 -- accel/accel.sh@19 -- # read -r var val 00:13:14.322 12:57:18 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:14.322 12:57:18 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.322 12:57:18 -- accel/accel.sh@19 -- # IFS=: 00:13:14.322 12:57:18 -- accel/accel.sh@19 -- # read -r var val 00:13:14.322 12:57:18 -- accel/accel.sh@20 -- # val=Yes 00:13:14.322 12:57:18 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.322 12:57:18 -- accel/accel.sh@19 -- # IFS=: 00:13:14.322 12:57:18 -- accel/accel.sh@19 -- # read -r var val 00:13:14.322 12:57:18 -- accel/accel.sh@20 -- # val= 00:13:14.322 12:57:18 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.322 12:57:18 -- accel/accel.sh@19 -- # IFS=: 00:13:14.322 12:57:18 -- accel/accel.sh@19 -- # read -r var val 00:13:14.322 12:57:18 -- accel/accel.sh@20 -- # val= 00:13:14.322 12:57:18 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.322 12:57:18 -- accel/accel.sh@19 -- # IFS=: 00:13:14.322 12:57:18 -- accel/accel.sh@19 -- # read -r var val 00:13:16.250 12:57:20 -- accel/accel.sh@20 -- # val= 00:13:16.250 12:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:13:16.250 12:57:20 -- accel/accel.sh@19 -- # IFS=: 00:13:16.250 12:57:20 -- accel/accel.sh@19 -- # read -r var val 00:13:16.250 12:57:20 -- accel/accel.sh@20 -- # val= 00:13:16.250 12:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:13:16.250 12:57:20 -- accel/accel.sh@19 -- # IFS=: 00:13:16.250 12:57:20 -- accel/accel.sh@19 -- # read -r 
var val 00:13:16.250 12:57:20 -- accel/accel.sh@20 -- # val= 00:13:16.250 12:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:13:16.250 12:57:20 -- accel/accel.sh@19 -- # IFS=: 00:13:16.250 12:57:20 -- accel/accel.sh@19 -- # read -r var val 00:13:16.250 12:57:20 -- accel/accel.sh@20 -- # val= 00:13:16.250 12:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:13:16.250 12:57:20 -- accel/accel.sh@19 -- # IFS=: 00:13:16.250 12:57:20 -- accel/accel.sh@19 -- # read -r var val 00:13:16.250 12:57:20 -- accel/accel.sh@20 -- # val= 00:13:16.250 12:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:13:16.250 12:57:20 -- accel/accel.sh@19 -- # IFS=: 00:13:16.250 12:57:20 -- accel/accel.sh@19 -- # read -r var val 00:13:16.250 12:57:20 -- accel/accel.sh@20 -- # val= 00:13:16.250 12:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:13:16.250 12:57:20 -- accel/accel.sh@19 -- # IFS=: 00:13:16.250 12:57:20 -- accel/accel.sh@19 -- # read -r var val 00:13:16.250 ************************************ 00:13:16.250 END TEST accel_decmop_full 00:13:16.250 ************************************ 00:13:16.250 12:57:20 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:16.250 12:57:20 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:16.250 12:57:20 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:16.250 00:13:16.250 real 0m2.547s 00:13:16.250 user 0m2.292s 00:13:16.250 sys 0m0.185s 00:13:16.250 12:57:20 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:13:16.250 12:57:20 -- common/autotest_common.sh@10 -- # set +x 00:13:16.250 12:57:20 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:13:16.250 12:57:20 -- common/autotest_common.sh@1075 -- # '[' 11 -le 1 ']' 00:13:16.250 12:57:20 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:16.250 12:57:20 -- common/autotest_common.sh@10 -- # set +x 00:13:16.250 ************************************ 00:13:16.250 START TEST accel_decomp_mcore 00:13:16.250 ************************************ 00:13:16.250 12:57:20 -- common/autotest_common.sh@1099 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:13:16.250 12:57:20 -- accel/accel.sh@16 -- # local accel_opc 00:13:16.250 12:57:20 -- accel/accel.sh@17 -- # local accel_module 00:13:16.250 12:57:20 -- accel/accel.sh@19 -- # IFS=: 00:13:16.250 12:57:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:13:16.250 12:57:20 -- accel/accel.sh@19 -- # read -r var val 00:13:16.250 12:57:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:13:16.250 12:57:20 -- accel/accel.sh@12 -- # build_accel_config 00:13:16.250 12:57:20 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:16.250 12:57:20 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:16.250 12:57:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:16.250 12:57:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:16.250 12:57:20 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:16.250 12:57:20 -- accel/accel.sh@40 -- # local IFS=, 00:13:16.250 12:57:20 -- accel/accel.sh@41 -- # jq -r . 00:13:16.250 [2024-04-17 12:57:20.263164] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
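accel_decomp_mcore reruns the workload with -m 0xf, a hex bitmap of CPU cores: 0xf is binary 1111, so the EAL parameters line that follows carries -c 0xf, 'Total cores available: 4' is printed, and four reactors start on cores 0-3 instead of one. A one-liner for building such a mask for the lowest n cores (illustrative, not from the test scripts):

    n=4
    printf -- '-m 0x%x\n' "$(( (1 << n) - 1 ))"   # prints '-m 0xf'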
00:13:16.250 [2024-04-17 12:57:20.263465] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114431 ] 00:13:16.507 [2024-04-17 12:57:20.440427] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:16.765 [2024-04-17 12:57:20.669187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.765 [2024-04-17 12:57:20.669309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:16.765 [2024-04-17 12:57:20.669420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.765 [2024-04-17 12:57:20.669426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:16.765 12:57:20 -- accel/accel.sh@20 -- # val= 00:13:16.765 12:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:13:16.765 12:57:20 -- accel/accel.sh@19 -- # IFS=: 00:13:16.765 12:57:20 -- accel/accel.sh@19 -- # read -r var val 00:13:16.765 12:57:20 -- accel/accel.sh@20 -- # val= 00:13:16.765 12:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:13:16.765 12:57:20 -- accel/accel.sh@19 -- # IFS=: 00:13:16.765 12:57:20 -- accel/accel.sh@19 -- # read -r var val 00:13:16.765 12:57:20 -- accel/accel.sh@20 -- # val= 00:13:16.765 12:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:13:16.765 12:57:20 -- accel/accel.sh@19 -- # IFS=: 00:13:16.765 12:57:20 -- accel/accel.sh@19 -- # read -r var val 00:13:16.765 12:57:20 -- accel/accel.sh@20 -- # val=0xf 00:13:16.765 12:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:13:16.765 12:57:20 -- accel/accel.sh@19 -- # IFS=: 00:13:16.765 12:57:20 -- accel/accel.sh@19 -- # read -r var val 00:13:16.765 12:57:20 -- accel/accel.sh@20 -- # val= 00:13:16.765 12:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:13:16.765 12:57:20 -- accel/accel.sh@19 -- # IFS=: 00:13:16.765 12:57:20 -- accel/accel.sh@19 -- # read -r var val 00:13:16.765 12:57:20 -- accel/accel.sh@20 -- # val= 00:13:16.765 12:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:13:16.765 12:57:20 -- accel/accel.sh@19 -- # IFS=: 00:13:16.765 12:57:20 -- accel/accel.sh@19 -- # read -r var val 00:13:16.765 12:57:20 -- accel/accel.sh@20 -- # val=decompress 00:13:16.765 12:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:13:16.765 12:57:20 -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:16.765 12:57:20 -- accel/accel.sh@19 -- # IFS=: 00:13:16.765 12:57:20 -- accel/accel.sh@19 -- # read -r var val 00:13:16.765 12:57:20 -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:16.765 12:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:13:16.765 12:57:20 -- accel/accel.sh@19 -- # IFS=: 00:13:16.765 12:57:20 -- accel/accel.sh@19 -- # read -r var val 00:13:16.765 12:57:20 -- accel/accel.sh@20 -- # val= 00:13:16.765 12:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:13:16.765 12:57:20 -- accel/accel.sh@19 -- # IFS=: 00:13:16.765 12:57:20 -- accel/accel.sh@19 -- # read -r var val 00:13:16.765 12:57:20 -- accel/accel.sh@20 -- # val=software 00:13:16.765 12:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:13:16.765 12:57:20 -- accel/accel.sh@22 -- # accel_module=software 00:13:16.765 12:57:20 -- accel/accel.sh@19 -- # IFS=: 00:13:16.765 12:57:20 -- accel/accel.sh@19 -- # read -r var val 00:13:16.765 12:57:20 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:16.765 12:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:13:16.765 12:57:20 -- accel/accel.sh@19 -- # IFS=: 
00:13:16.765 12:57:20 -- accel/accel.sh@19 -- # read -r var val 00:13:16.765 12:57:20 -- accel/accel.sh@20 -- # val=32 00:13:16.765 12:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:13:16.765 12:57:20 -- accel/accel.sh@19 -- # IFS=: 00:13:16.765 12:57:20 -- accel/accel.sh@19 -- # read -r var val 00:13:16.765 12:57:20 -- accel/accel.sh@20 -- # val=32 00:13:16.765 12:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:13:16.765 12:57:20 -- accel/accel.sh@19 -- # IFS=: 00:13:16.765 12:57:20 -- accel/accel.sh@19 -- # read -r var val 00:13:16.765 12:57:20 -- accel/accel.sh@20 -- # val=1 00:13:16.765 12:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:13:16.765 12:57:20 -- accel/accel.sh@19 -- # IFS=: 00:13:16.765 12:57:20 -- accel/accel.sh@19 -- # read -r var val 00:13:16.765 12:57:20 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:16.765 12:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:13:16.765 12:57:20 -- accel/accel.sh@19 -- # IFS=: 00:13:16.765 12:57:20 -- accel/accel.sh@19 -- # read -r var val 00:13:16.765 12:57:20 -- accel/accel.sh@20 -- # val=Yes 00:13:16.765 12:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:13:16.765 12:57:20 -- accel/accel.sh@19 -- # IFS=: 00:13:16.765 12:57:20 -- accel/accel.sh@19 -- # read -r var val 00:13:16.765 12:57:20 -- accel/accel.sh@20 -- # val= 00:13:16.765 12:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:13:16.765 12:57:20 -- accel/accel.sh@19 -- # IFS=: 00:13:16.765 12:57:20 -- accel/accel.sh@19 -- # read -r var val 00:13:16.765 12:57:20 -- accel/accel.sh@20 -- # val= 00:13:16.765 12:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:13:16.765 12:57:20 -- accel/accel.sh@19 -- # IFS=: 00:13:16.765 12:57:20 -- accel/accel.sh@19 -- # read -r var val 00:13:19.296 12:57:22 -- accel/accel.sh@20 -- # val= 00:13:19.296 12:57:22 -- accel/accel.sh@21 -- # case "$var" in 00:13:19.296 12:57:22 -- accel/accel.sh@19 -- # IFS=: 00:13:19.296 12:57:22 -- accel/accel.sh@19 -- # read -r var val 00:13:19.296 12:57:22 -- accel/accel.sh@20 -- # val= 00:13:19.296 12:57:22 -- accel/accel.sh@21 -- # case "$var" in 00:13:19.296 12:57:22 -- accel/accel.sh@19 -- # IFS=: 00:13:19.296 12:57:22 -- accel/accel.sh@19 -- # read -r var val 00:13:19.296 12:57:22 -- accel/accel.sh@20 -- # val= 00:13:19.296 12:57:22 -- accel/accel.sh@21 -- # case "$var" in 00:13:19.296 12:57:22 -- accel/accel.sh@19 -- # IFS=: 00:13:19.296 12:57:22 -- accel/accel.sh@19 -- # read -r var val 00:13:19.296 12:57:22 -- accel/accel.sh@20 -- # val= 00:13:19.296 12:57:22 -- accel/accel.sh@21 -- # case "$var" in 00:13:19.296 12:57:22 -- accel/accel.sh@19 -- # IFS=: 00:13:19.296 12:57:22 -- accel/accel.sh@19 -- # read -r var val 00:13:19.296 12:57:22 -- accel/accel.sh@20 -- # val= 00:13:19.296 12:57:22 -- accel/accel.sh@21 -- # case "$var" in 00:13:19.296 12:57:22 -- accel/accel.sh@19 -- # IFS=: 00:13:19.296 12:57:22 -- accel/accel.sh@19 -- # read -r var val 00:13:19.296 12:57:22 -- accel/accel.sh@20 -- # val= 00:13:19.296 12:57:22 -- accel/accel.sh@21 -- # case "$var" in 00:13:19.296 12:57:22 -- accel/accel.sh@19 -- # IFS=: 00:13:19.296 12:57:22 -- accel/accel.sh@19 -- # read -r var val 00:13:19.296 12:57:22 -- accel/accel.sh@20 -- # val= 00:13:19.296 12:57:22 -- accel/accel.sh@21 -- # case "$var" in 00:13:19.296 12:57:22 -- accel/accel.sh@19 -- # IFS=: 00:13:19.296 12:57:22 -- accel/accel.sh@19 -- # read -r var val 00:13:19.296 12:57:22 -- accel/accel.sh@20 -- # val= 00:13:19.296 12:57:22 -- accel/accel.sh@21 -- # case "$var" in 00:13:19.296 12:57:22 -- accel/accel.sh@19 -- # IFS=: 00:13:19.296 12:57:22 -- 
accel/accel.sh@19 -- # read -r var val 00:13:19.296 12:57:22 -- accel/accel.sh@20 -- # val= 00:13:19.296 12:57:22 -- accel/accel.sh@21 -- # case "$var" in 00:13:19.296 12:57:22 -- accel/accel.sh@19 -- # IFS=: 00:13:19.296 12:57:22 -- accel/accel.sh@19 -- # read -r var val 00:13:19.296 ************************************ 00:13:19.296 END TEST accel_decomp_mcore 00:13:19.296 ************************************ 00:13:19.296 12:57:22 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:19.296 12:57:22 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:19.296 12:57:22 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:19.296 00:13:19.296 real 0m2.610s 00:13:19.296 user 0m7.592s 00:13:19.296 sys 0m0.227s 00:13:19.296 12:57:22 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:13:19.296 12:57:22 -- common/autotest_common.sh@10 -- # set +x 00:13:19.296 12:57:22 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:19.296 12:57:22 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:13:19.296 12:57:22 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:19.296 12:57:22 -- common/autotest_common.sh@10 -- # set +x 00:13:19.296 ************************************ 00:13:19.296 START TEST accel_decomp_full_mcore 00:13:19.296 ************************************ 00:13:19.296 12:57:22 -- common/autotest_common.sh@1099 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:19.296 12:57:22 -- accel/accel.sh@16 -- # local accel_opc 00:13:19.296 12:57:22 -- accel/accel.sh@17 -- # local accel_module 00:13:19.296 12:57:22 -- accel/accel.sh@19 -- # IFS=: 00:13:19.296 12:57:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:19.296 12:57:22 -- accel/accel.sh@19 -- # read -r var val 00:13:19.296 12:57:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:19.296 12:57:22 -- accel/accel.sh@12 -- # build_accel_config 00:13:19.296 12:57:22 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:19.296 12:57:22 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:19.296 12:57:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:19.296 12:57:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:19.296 12:57:22 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:19.296 12:57:22 -- accel/accel.sh@40 -- # local IFS=, 00:13:19.296 12:57:22 -- accel/accel.sh@41 -- # jq -r . 00:13:19.296 [2024-04-17 12:57:22.955381] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
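The timing triple the mcore test just printed, real 0m2.610s against user 0m7.592s, is expected rather than alarming: SPDK reactors busy-poll, so each of the four cores accrues CPU time for as long as it is up, and user time approaches 4 x real minus startup and teardown. The ratio here:

    awk 'BEGIN { printf "user/real = %.2f\n", 7.592 / 2.610 }'   # 2.91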
00:13:19.296 [2024-04-17 12:57:22.955722] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114496 ] 00:13:19.296 [2024-04-17 12:57:23.134736] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:19.296 [2024-04-17 12:57:23.369275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:19.296 [2024-04-17 12:57:23.369403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:19.296 [2024-04-17 12:57:23.369525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:19.296 [2024-04-17 12:57:23.369531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.555 12:57:23 -- accel/accel.sh@20 -- # val= 00:13:19.555 12:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:13:19.555 12:57:23 -- accel/accel.sh@19 -- # IFS=: 00:13:19.555 12:57:23 -- accel/accel.sh@19 -- # read -r var val 00:13:19.555 12:57:23 -- accel/accel.sh@20 -- # val= 00:13:19.555 12:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:13:19.555 12:57:23 -- accel/accel.sh@19 -- # IFS=: 00:13:19.555 12:57:23 -- accel/accel.sh@19 -- # read -r var val 00:13:19.555 12:57:23 -- accel/accel.sh@20 -- # val= 00:13:19.555 12:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:13:19.555 12:57:23 -- accel/accel.sh@19 -- # IFS=: 00:13:19.555 12:57:23 -- accel/accel.sh@19 -- # read -r var val 00:13:19.555 12:57:23 -- accel/accel.sh@20 -- # val=0xf 00:13:19.555 12:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:13:19.555 12:57:23 -- accel/accel.sh@19 -- # IFS=: 00:13:19.555 12:57:23 -- accel/accel.sh@19 -- # read -r var val 00:13:19.555 12:57:23 -- accel/accel.sh@20 -- # val= 00:13:19.555 12:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:13:19.555 12:57:23 -- accel/accel.sh@19 -- # IFS=: 00:13:19.555 12:57:23 -- accel/accel.sh@19 -- # read -r var val 00:13:19.555 12:57:23 -- accel/accel.sh@20 -- # val= 00:13:19.555 12:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:13:19.555 12:57:23 -- accel/accel.sh@19 -- # IFS=: 00:13:19.555 12:57:23 -- accel/accel.sh@19 -- # read -r var val 00:13:19.555 12:57:23 -- accel/accel.sh@20 -- # val=decompress 00:13:19.555 12:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:13:19.555 12:57:23 -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:19.555 12:57:23 -- accel/accel.sh@19 -- # IFS=: 00:13:19.555 12:57:23 -- accel/accel.sh@19 -- # read -r var val 00:13:19.555 12:57:23 -- accel/accel.sh@20 -- # val='111250 bytes' 00:13:19.555 12:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:13:19.555 12:57:23 -- accel/accel.sh@19 -- # IFS=: 00:13:19.555 12:57:23 -- accel/accel.sh@19 -- # read -r var val 00:13:19.555 12:57:23 -- accel/accel.sh@20 -- # val= 00:13:19.555 12:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:13:19.555 12:57:23 -- accel/accel.sh@19 -- # IFS=: 00:13:19.555 12:57:23 -- accel/accel.sh@19 -- # read -r var val 00:13:19.555 12:57:23 -- accel/accel.sh@20 -- # val=software 00:13:19.555 12:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:13:19.555 12:57:23 -- accel/accel.sh@22 -- # accel_module=software 00:13:19.555 12:57:23 -- accel/accel.sh@19 -- # IFS=: 00:13:19.555 12:57:23 -- accel/accel.sh@19 -- # read -r var val 00:13:19.555 12:57:23 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:19.555 12:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:13:19.555 12:57:23 -- accel/accel.sh@19 -- # IFS=: 
00:13:19.555 12:57:23 -- accel/accel.sh@19 -- # read -r var val 00:13:19.555 12:57:23 -- accel/accel.sh@20 -- # val=32 00:13:19.555 12:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:13:19.555 12:57:23 -- accel/accel.sh@19 -- # IFS=: 00:13:19.555 12:57:23 -- accel/accel.sh@19 -- # read -r var val 00:13:19.555 12:57:23 -- accel/accel.sh@20 -- # val=32 00:13:19.555 12:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:13:19.555 12:57:23 -- accel/accel.sh@19 -- # IFS=: 00:13:19.555 12:57:23 -- accel/accel.sh@19 -- # read -r var val 00:13:19.555 12:57:23 -- accel/accel.sh@20 -- # val=1 00:13:19.555 12:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:13:19.555 12:57:23 -- accel/accel.sh@19 -- # IFS=: 00:13:19.555 12:57:23 -- accel/accel.sh@19 -- # read -r var val 00:13:19.555 12:57:23 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:19.555 12:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:13:19.555 12:57:23 -- accel/accel.sh@19 -- # IFS=: 00:13:19.555 12:57:23 -- accel/accel.sh@19 -- # read -r var val 00:13:19.555 12:57:23 -- accel/accel.sh@20 -- # val=Yes 00:13:19.555 12:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:13:19.555 12:57:23 -- accel/accel.sh@19 -- # IFS=: 00:13:19.555 12:57:23 -- accel/accel.sh@19 -- # read -r var val 00:13:19.555 12:57:23 -- accel/accel.sh@20 -- # val= 00:13:19.555 12:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:13:19.555 12:57:23 -- accel/accel.sh@19 -- # IFS=: 00:13:19.555 12:57:23 -- accel/accel.sh@19 -- # read -r var val 00:13:19.555 12:57:23 -- accel/accel.sh@20 -- # val= 00:13:19.555 12:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:13:19.555 12:57:23 -- accel/accel.sh@19 -- # IFS=: 00:13:19.555 12:57:23 -- accel/accel.sh@19 -- # read -r var val 00:13:21.459 12:57:25 -- accel/accel.sh@20 -- # val= 00:13:21.459 12:57:25 -- accel/accel.sh@21 -- # case "$var" in 00:13:21.459 12:57:25 -- accel/accel.sh@19 -- # IFS=: 00:13:21.459 12:57:25 -- accel/accel.sh@19 -- # read -r var val 00:13:21.459 12:57:25 -- accel/accel.sh@20 -- # val= 00:13:21.459 12:57:25 -- accel/accel.sh@21 -- # case "$var" in 00:13:21.459 12:57:25 -- accel/accel.sh@19 -- # IFS=: 00:13:21.459 12:57:25 -- accel/accel.sh@19 -- # read -r var val 00:13:21.459 12:57:25 -- accel/accel.sh@20 -- # val= 00:13:21.459 12:57:25 -- accel/accel.sh@21 -- # case "$var" in 00:13:21.459 12:57:25 -- accel/accel.sh@19 -- # IFS=: 00:13:21.459 12:57:25 -- accel/accel.sh@19 -- # read -r var val 00:13:21.459 12:57:25 -- accel/accel.sh@20 -- # val= 00:13:21.459 12:57:25 -- accel/accel.sh@21 -- # case "$var" in 00:13:21.459 12:57:25 -- accel/accel.sh@19 -- # IFS=: 00:13:21.459 12:57:25 -- accel/accel.sh@19 -- # read -r var val 00:13:21.459 12:57:25 -- accel/accel.sh@20 -- # val= 00:13:21.459 12:57:25 -- accel/accel.sh@21 -- # case "$var" in 00:13:21.459 12:57:25 -- accel/accel.sh@19 -- # IFS=: 00:13:21.459 12:57:25 -- accel/accel.sh@19 -- # read -r var val 00:13:21.459 12:57:25 -- accel/accel.sh@20 -- # val= 00:13:21.459 12:57:25 -- accel/accel.sh@21 -- # case "$var" in 00:13:21.459 12:57:25 -- accel/accel.sh@19 -- # IFS=: 00:13:21.459 12:57:25 -- accel/accel.sh@19 -- # read -r var val 00:13:21.459 12:57:25 -- accel/accel.sh@20 -- # val= 00:13:21.459 12:57:25 -- accel/accel.sh@21 -- # case "$var" in 00:13:21.459 12:57:25 -- accel/accel.sh@19 -- # IFS=: 00:13:21.459 12:57:25 -- accel/accel.sh@19 -- # read -r var val 00:13:21.459 12:57:25 -- accel/accel.sh@20 -- # val= 00:13:21.459 12:57:25 -- accel/accel.sh@21 -- # case "$var" in 00:13:21.459 12:57:25 -- accel/accel.sh@19 -- # IFS=: 00:13:21.459 12:57:25 -- 
accel/accel.sh@19 -- # read -r var val 00:13:21.459 12:57:25 -- accel/accel.sh@20 -- # val= 00:13:21.459 12:57:25 -- accel/accel.sh@21 -- # case "$var" in 00:13:21.459 12:57:25 -- accel/accel.sh@19 -- # IFS=: 00:13:21.459 12:57:25 -- accel/accel.sh@19 -- # read -r var val 00:13:21.459 ************************************ 00:13:21.459 END TEST accel_decomp_full_mcore 00:13:21.459 ************************************ 00:13:21.459 12:57:25 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:21.459 12:57:25 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:21.459 12:57:25 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:21.459 00:13:21.459 real 0m2.609s 00:13:21.459 user 0m7.543s 00:13:21.459 sys 0m0.214s 00:13:21.459 12:57:25 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:13:21.459 12:57:25 -- common/autotest_common.sh@10 -- # set +x 00:13:21.459 12:57:25 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:13:21.459 12:57:25 -- common/autotest_common.sh@1075 -- # '[' 11 -le 1 ']' 00:13:21.459 12:57:25 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:21.459 12:57:25 -- common/autotest_common.sh@10 -- # set +x 00:13:21.459 ************************************ 00:13:21.459 START TEST accel_decomp_mthread 00:13:21.459 ************************************ 00:13:21.459 12:57:25 -- common/autotest_common.sh@1099 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:13:21.459 12:57:25 -- accel/accel.sh@16 -- # local accel_opc 00:13:21.459 12:57:25 -- accel/accel.sh@17 -- # local accel_module 00:13:21.459 12:57:25 -- accel/accel.sh@19 -- # IFS=: 00:13:21.459 12:57:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:13:21.459 12:57:25 -- accel/accel.sh@19 -- # read -r var val 00:13:21.459 12:57:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:13:21.459 12:57:25 -- accel/accel.sh@12 -- # build_accel_config 00:13:21.459 12:57:25 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:21.459 12:57:25 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:21.459 12:57:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:21.459 12:57:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:21.459 12:57:25 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:21.459 12:57:25 -- accel/accel.sh@40 -- # local IFS=, 00:13:21.459 12:57:25 -- accel/accel.sh@41 -- # jq -r . 00:13:21.718 [2024-04-17 12:57:25.634937] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
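The starred START TEST / END TEST banners and the real/user/sys triples around every case come from the run_test wrapper in common/autotest_common.sh, the '@1075'/'@1099' frames in the traces above. Its observable behaviour reduces to roughly this, a paraphrase rather than the verbatim function:

    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"            # e.g. accel_test -t 1 -w decompress ... -T 2
        local rc=$?
        echo "************ END TEST $name ************"
        return $rc
    }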
00:13:21.718 [2024-04-17 12:57:25.635197] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114560 ] 00:13:21.718 [2024-04-17 12:57:25.795245] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.977 [2024-04-17 12:57:26.026363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.236 12:57:26 -- accel/accel.sh@20 -- # val= 00:13:22.236 12:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.236 12:57:26 -- accel/accel.sh@19 -- # IFS=: 00:13:22.236 12:57:26 -- accel/accel.sh@19 -- # read -r var val 00:13:22.236 12:57:26 -- accel/accel.sh@20 -- # val= 00:13:22.236 12:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.236 12:57:26 -- accel/accel.sh@19 -- # IFS=: 00:13:22.236 12:57:26 -- accel/accel.sh@19 -- # read -r var val 00:13:22.236 12:57:26 -- accel/accel.sh@20 -- # val= 00:13:22.236 12:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.236 12:57:26 -- accel/accel.sh@19 -- # IFS=: 00:13:22.236 12:57:26 -- accel/accel.sh@19 -- # read -r var val 00:13:22.236 12:57:26 -- accel/accel.sh@20 -- # val=0x1 00:13:22.236 12:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.236 12:57:26 -- accel/accel.sh@19 -- # IFS=: 00:13:22.236 12:57:26 -- accel/accel.sh@19 -- # read -r var val 00:13:22.236 12:57:26 -- accel/accel.sh@20 -- # val= 00:13:22.236 12:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.236 12:57:26 -- accel/accel.sh@19 -- # IFS=: 00:13:22.236 12:57:26 -- accel/accel.sh@19 -- # read -r var val 00:13:22.236 12:57:26 -- accel/accel.sh@20 -- # val= 00:13:22.236 12:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.236 12:57:26 -- accel/accel.sh@19 -- # IFS=: 00:13:22.236 12:57:26 -- accel/accel.sh@19 -- # read -r var val 00:13:22.236 12:57:26 -- accel/accel.sh@20 -- # val=decompress 00:13:22.236 12:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.236 12:57:26 -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:22.236 12:57:26 -- accel/accel.sh@19 -- # IFS=: 00:13:22.236 12:57:26 -- accel/accel.sh@19 -- # read -r var val 00:13:22.236 12:57:26 -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:22.236 12:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.236 12:57:26 -- accel/accel.sh@19 -- # IFS=: 00:13:22.236 12:57:26 -- accel/accel.sh@19 -- # read -r var val 00:13:22.236 12:57:26 -- accel/accel.sh@20 -- # val= 00:13:22.237 12:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.237 12:57:26 -- accel/accel.sh@19 -- # IFS=: 00:13:22.237 12:57:26 -- accel/accel.sh@19 -- # read -r var val 00:13:22.237 12:57:26 -- accel/accel.sh@20 -- # val=software 00:13:22.237 12:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.237 12:57:26 -- accel/accel.sh@22 -- # accel_module=software 00:13:22.237 12:57:26 -- accel/accel.sh@19 -- # IFS=: 00:13:22.237 12:57:26 -- accel/accel.sh@19 -- # read -r var val 00:13:22.237 12:57:26 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:22.237 12:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.237 12:57:26 -- accel/accel.sh@19 -- # IFS=: 00:13:22.237 12:57:26 -- accel/accel.sh@19 -- # read -r var val 00:13:22.237 12:57:26 -- accel/accel.sh@20 -- # val=32 00:13:22.237 12:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.237 12:57:26 -- accel/accel.sh@19 -- # IFS=: 00:13:22.237 12:57:26 -- accel/accel.sh@19 -- # read -r var val 00:13:22.237 12:57:26 -- 
accel/accel.sh@20 -- # val=32 00:13:22.237 12:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.237 12:57:26 -- accel/accel.sh@19 -- # IFS=: 00:13:22.237 12:57:26 -- accel/accel.sh@19 -- # read -r var val 00:13:22.237 12:57:26 -- accel/accel.sh@20 -- # val=2 00:13:22.237 12:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.237 12:57:26 -- accel/accel.sh@19 -- # IFS=: 00:13:22.237 12:57:26 -- accel/accel.sh@19 -- # read -r var val 00:13:22.237 12:57:26 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:22.237 12:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.237 12:57:26 -- accel/accel.sh@19 -- # IFS=: 00:13:22.237 12:57:26 -- accel/accel.sh@19 -- # read -r var val 00:13:22.237 12:57:26 -- accel/accel.sh@20 -- # val=Yes 00:13:22.237 12:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.237 12:57:26 -- accel/accel.sh@19 -- # IFS=: 00:13:22.237 12:57:26 -- accel/accel.sh@19 -- # read -r var val 00:13:22.237 12:57:26 -- accel/accel.sh@20 -- # val= 00:13:22.237 12:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.237 12:57:26 -- accel/accel.sh@19 -- # IFS=: 00:13:22.237 12:57:26 -- accel/accel.sh@19 -- # read -r var val 00:13:22.237 12:57:26 -- accel/accel.sh@20 -- # val= 00:13:22.237 12:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:13:22.237 12:57:26 -- accel/accel.sh@19 -- # IFS=: 00:13:22.237 12:57:26 -- accel/accel.sh@19 -- # read -r var val 00:13:24.141 12:57:28 -- accel/accel.sh@20 -- # val= 00:13:24.141 12:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:13:24.141 12:57:28 -- accel/accel.sh@19 -- # IFS=: 00:13:24.141 12:57:28 -- accel/accel.sh@19 -- # read -r var val 00:13:24.141 12:57:28 -- accel/accel.sh@20 -- # val= 00:13:24.141 12:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:13:24.141 12:57:28 -- accel/accel.sh@19 -- # IFS=: 00:13:24.141 12:57:28 -- accel/accel.sh@19 -- # read -r var val 00:13:24.141 12:57:28 -- accel/accel.sh@20 -- # val= 00:13:24.141 12:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:13:24.141 12:57:28 -- accel/accel.sh@19 -- # IFS=: 00:13:24.141 12:57:28 -- accel/accel.sh@19 -- # read -r var val 00:13:24.141 12:57:28 -- accel/accel.sh@20 -- # val= 00:13:24.141 12:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:13:24.141 12:57:28 -- accel/accel.sh@19 -- # IFS=: 00:13:24.141 12:57:28 -- accel/accel.sh@19 -- # read -r var val 00:13:24.141 12:57:28 -- accel/accel.sh@20 -- # val= 00:13:24.141 12:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:13:24.141 12:57:28 -- accel/accel.sh@19 -- # IFS=: 00:13:24.141 12:57:28 -- accel/accel.sh@19 -- # read -r var val 00:13:24.141 12:57:28 -- accel/accel.sh@20 -- # val= 00:13:24.141 12:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:13:24.141 12:57:28 -- accel/accel.sh@19 -- # IFS=: 00:13:24.141 12:57:28 -- accel/accel.sh@19 -- # read -r var val 00:13:24.141 12:57:28 -- accel/accel.sh@20 -- # val= 00:13:24.141 12:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:13:24.141 12:57:28 -- accel/accel.sh@19 -- # IFS=: 00:13:24.141 12:57:28 -- accel/accel.sh@19 -- # read -r var val 00:13:24.141 12:57:28 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:24.141 12:57:28 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:24.141 ************************************ 00:13:24.141 END TEST accel_decomp_mthread 00:13:24.141 ************************************ 00:13:24.141 12:57:28 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:24.141 00:13:24.141 real 0m2.549s 00:13:24.141 user 0m2.281s 00:13:24.141 sys 0m0.195s 00:13:24.141 12:57:28 -- common/autotest_common.sh@1100 -- # 
xtrace_disable 00:13:24.141 12:57:28 -- common/autotest_common.sh@10 -- # set +x 00:13:24.141 12:57:28 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:24.141 12:57:28 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:13:24.141 12:57:28 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:24.141 12:57:28 -- common/autotest_common.sh@10 -- # set +x 00:13:24.141 ************************************ 00:13:24.141 START TEST accel_deomp_full_mthread 00:13:24.141 ************************************ 00:13:24.141 12:57:28 -- common/autotest_common.sh@1099 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:24.141 12:57:28 -- accel/accel.sh@16 -- # local accel_opc 00:13:24.141 12:57:28 -- accel/accel.sh@17 -- # local accel_module 00:13:24.141 12:57:28 -- accel/accel.sh@19 -- # IFS=: 00:13:24.142 12:57:28 -- accel/accel.sh@19 -- # read -r var val 00:13:24.142 12:57:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:24.142 12:57:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:24.142 12:57:28 -- accel/accel.sh@12 -- # build_accel_config 00:13:24.142 12:57:28 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:24.142 12:57:28 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:24.142 12:57:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:24.142 12:57:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:24.142 12:57:28 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:24.142 12:57:28 -- accel/accel.sh@40 -- # local IFS=, 00:13:24.142 12:57:28 -- accel/accel.sh@41 -- # jq -r . 00:13:24.142 [2024-04-17 12:57:28.261050] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
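The run_test line above shows the whole accel_test invocation; it reduces to a single accel_perf command. A minimal sketch of reproducing it by hand, assuming the vagrant checkout layout used in this job. The flag readings in the comments are inferred from the trace, and the -c config plumbing (build_accel_config piped over /dev/fd/62) is omitted here, so module defaults apply:

SPDK=/home/vagrant/spdk_repo/spdk
# -t 1: run for one second; -w decompress: workload; -l: compressed input file
# -y: verify output; -o 0: io size (the full-buffer variant passes 0); -T 2: two threads
"$SPDK/build/examples/accel_perf" -t 1 -w decompress \
    -l "$SPDK/test/accel/bib" -y -o 0 -T 2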
00:13:24.142 [2024-04-17 12:57:28.261461] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114636 ] 00:13:24.400 [2024-04-17 12:57:28.435409] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.659 [2024-04-17 12:57:28.658716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.920 12:57:28 -- accel/accel.sh@20 -- # val= 00:13:24.920 12:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:13:24.920 12:57:28 -- accel/accel.sh@19 -- # IFS=: 00:13:24.920 12:57:28 -- accel/accel.sh@19 -- # read -r var val 00:13:24.920 12:57:28 -- accel/accel.sh@20 -- # val= 00:13:24.920 12:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:13:24.920 12:57:28 -- accel/accel.sh@19 -- # IFS=: 00:13:24.920 12:57:28 -- accel/accel.sh@19 -- # read -r var val 00:13:24.920 12:57:28 -- accel/accel.sh@20 -- # val= 00:13:24.920 12:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:13:24.920 12:57:28 -- accel/accel.sh@19 -- # IFS=: 00:13:24.920 12:57:28 -- accel/accel.sh@19 -- # read -r var val 00:13:24.920 12:57:28 -- accel/accel.sh@20 -- # val=0x1 00:13:24.920 12:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:13:24.920 12:57:28 -- accel/accel.sh@19 -- # IFS=: 00:13:24.920 12:57:28 -- accel/accel.sh@19 -- # read -r var val 00:13:24.920 12:57:28 -- accel/accel.sh@20 -- # val= 00:13:24.920 12:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:13:24.920 12:57:28 -- accel/accel.sh@19 -- # IFS=: 00:13:24.920 12:57:28 -- accel/accel.sh@19 -- # read -r var val 00:13:24.920 12:57:28 -- accel/accel.sh@20 -- # val= 00:13:24.920 12:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:13:24.920 12:57:28 -- accel/accel.sh@19 -- # IFS=: 00:13:24.920 12:57:28 -- accel/accel.sh@19 -- # read -r var val 00:13:24.920 12:57:28 -- accel/accel.sh@20 -- # val=decompress 00:13:24.920 12:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:13:24.920 12:57:28 -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:24.920 12:57:28 -- accel/accel.sh@19 -- # IFS=: 00:13:24.920 12:57:28 -- accel/accel.sh@19 -- # read -r var val 00:13:24.920 12:57:28 -- accel/accel.sh@20 -- # val='111250 bytes' 00:13:24.920 12:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:13:24.920 12:57:28 -- accel/accel.sh@19 -- # IFS=: 00:13:24.920 12:57:28 -- accel/accel.sh@19 -- # read -r var val 00:13:24.920 12:57:28 -- accel/accel.sh@20 -- # val= 00:13:24.920 12:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:13:24.920 12:57:28 -- accel/accel.sh@19 -- # IFS=: 00:13:24.920 12:57:28 -- accel/accel.sh@19 -- # read -r var val 00:13:24.920 12:57:28 -- accel/accel.sh@20 -- # val=software 00:13:24.920 12:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:13:24.920 12:57:28 -- accel/accel.sh@22 -- # accel_module=software 00:13:24.920 12:57:28 -- accel/accel.sh@19 -- # IFS=: 00:13:24.920 12:57:28 -- accel/accel.sh@19 -- # read -r var val 00:13:24.921 12:57:28 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:24.921 12:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:13:24.921 12:57:28 -- accel/accel.sh@19 -- # IFS=: 00:13:24.921 12:57:28 -- accel/accel.sh@19 -- # read -r var val 00:13:24.921 12:57:28 -- accel/accel.sh@20 -- # val=32 00:13:24.921 12:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:13:24.921 12:57:28 -- accel/accel.sh@19 -- # IFS=: 00:13:24.921 12:57:28 -- accel/accel.sh@19 -- # read -r var val 00:13:24.921 12:57:28 -- 
accel/accel.sh@20 -- # val=32 00:13:24.921 12:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:13:24.921 12:57:28 -- accel/accel.sh@19 -- # IFS=: 00:13:24.921 12:57:28 -- accel/accel.sh@19 -- # read -r var val 00:13:24.921 12:57:28 -- accel/accel.sh@20 -- # val=2 00:13:24.921 12:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:13:24.921 12:57:28 -- accel/accel.sh@19 -- # IFS=: 00:13:24.921 12:57:28 -- accel/accel.sh@19 -- # read -r var val 00:13:24.921 12:57:28 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:24.921 12:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:13:24.921 12:57:28 -- accel/accel.sh@19 -- # IFS=: 00:13:24.921 12:57:28 -- accel/accel.sh@19 -- # read -r var val 00:13:24.921 12:57:28 -- accel/accel.sh@20 -- # val=Yes 00:13:24.921 12:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:13:24.921 12:57:28 -- accel/accel.sh@19 -- # IFS=: 00:13:24.921 12:57:28 -- accel/accel.sh@19 -- # read -r var val 00:13:24.921 12:57:28 -- accel/accel.sh@20 -- # val= 00:13:24.921 12:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:13:24.921 12:57:28 -- accel/accel.sh@19 -- # IFS=: 00:13:24.921 12:57:28 -- accel/accel.sh@19 -- # read -r var val 00:13:24.921 12:57:28 -- accel/accel.sh@20 -- # val= 00:13:24.921 12:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:13:24.921 12:57:28 -- accel/accel.sh@19 -- # IFS=: 00:13:24.921 12:57:28 -- accel/accel.sh@19 -- # read -r var val 00:13:26.872 12:57:30 -- accel/accel.sh@20 -- # val= 00:13:26.872 12:57:30 -- accel/accel.sh@21 -- # case "$var" in 00:13:26.872 12:57:30 -- accel/accel.sh@19 -- # IFS=: 00:13:26.872 12:57:30 -- accel/accel.sh@19 -- # read -r var val 00:13:26.872 12:57:30 -- accel/accel.sh@20 -- # val= 00:13:26.872 12:57:30 -- accel/accel.sh@21 -- # case "$var" in 00:13:26.872 12:57:30 -- accel/accel.sh@19 -- # IFS=: 00:13:26.872 12:57:30 -- accel/accel.sh@19 -- # read -r var val 00:13:26.872 12:57:30 -- accel/accel.sh@20 -- # val= 00:13:26.872 12:57:30 -- accel/accel.sh@21 -- # case "$var" in 00:13:26.872 12:57:30 -- accel/accel.sh@19 -- # IFS=: 00:13:26.872 12:57:30 -- accel/accel.sh@19 -- # read -r var val 00:13:26.872 12:57:30 -- accel/accel.sh@20 -- # val= 00:13:26.872 12:57:30 -- accel/accel.sh@21 -- # case "$var" in 00:13:26.872 12:57:30 -- accel/accel.sh@19 -- # IFS=: 00:13:26.872 12:57:30 -- accel/accel.sh@19 -- # read -r var val 00:13:26.872 12:57:30 -- accel/accel.sh@20 -- # val= 00:13:26.872 12:57:30 -- accel/accel.sh@21 -- # case "$var" in 00:13:26.872 12:57:30 -- accel/accel.sh@19 -- # IFS=: 00:13:26.872 12:57:30 -- accel/accel.sh@19 -- # read -r var val 00:13:26.872 12:57:30 -- accel/accel.sh@20 -- # val= 00:13:26.872 12:57:30 -- accel/accel.sh@21 -- # case "$var" in 00:13:26.872 12:57:30 -- accel/accel.sh@19 -- # IFS=: 00:13:26.872 12:57:30 -- accel/accel.sh@19 -- # read -r var val 00:13:26.872 12:57:30 -- accel/accel.sh@20 -- # val= 00:13:26.872 12:57:30 -- accel/accel.sh@21 -- # case "$var" in 00:13:26.872 12:57:30 -- accel/accel.sh@19 -- # IFS=: 00:13:26.872 12:57:30 -- accel/accel.sh@19 -- # read -r var val 00:13:26.872 ************************************ 00:13:26.872 END TEST accel_deomp_full_mthread 00:13:26.872 ************************************ 00:13:26.872 12:57:30 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:26.872 12:57:30 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:26.872 12:57:30 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:26.872 00:13:26.872 real 0m2.605s 00:13:26.872 user 0m2.372s 00:13:26.872 sys 0m0.159s 00:13:26.872 12:57:30 -- common/autotest_common.sh@1100 -- # 
xtrace_disable 00:13:26.872 12:57:30 -- common/autotest_common.sh@10 -- # set +x 00:13:26.872 12:57:30 -- accel/accel.sh@124 -- # [[ n == y ]] 00:13:26.872 12:57:30 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:13:26.872 12:57:30 -- accel/accel.sh@137 -- # build_accel_config 00:13:26.872 12:57:30 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:26.872 12:57:30 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:13:26.872 12:57:30 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:26.872 12:57:30 -- common/autotest_common.sh@10 -- # set +x 00:13:26.872 12:57:30 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:26.872 12:57:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:26.872 12:57:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:26.872 12:57:30 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:26.872 12:57:30 -- accel/accel.sh@40 -- # local IFS=, 00:13:26.872 12:57:30 -- accel/accel.sh@41 -- # jq -r . 00:13:26.872 ************************************ 00:13:26.872 START TEST accel_dif_functional_tests 00:13:26.872 ************************************ 00:13:26.872 12:57:30 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:13:26.872 [2024-04-17 12:57:30.979646] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:13:26.872 [2024-04-17 12:57:30.980132] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114691 ] 00:13:27.131 [2024-04-17 12:57:31.166370] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:27.389 [2024-04-17 12:57:31.427592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:27.389 [2024-04-17 12:57:31.427737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:27.389 [2024-04-17 12:57:31.427744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.648 00:13:27.648 00:13:27.648 CUnit - A unit testing framework for C - Version 2.1-3 00:13:27.648 http://cunit.sourceforge.net/ 00:13:27.648 00:13:27.648 00:13:27.648 Suite: accel_dif 00:13:27.648 Test: verify: DIF generated, GUARD check ...passed 00:13:27.648 Test: verify: DIF generated, APPTAG check ...passed 00:13:27.648 Test: verify: DIF generated, REFTAG check ...passed 00:13:27.648 Test: verify: DIF not generated, GUARD check ...[2024-04-17 12:57:31.759694] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:13:27.648 [2024-04-17 12:57:31.760026] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:13:27.648 passed 00:13:27.648 Test: verify: DIF not generated, APPTAG check ...[2024-04-17 12:57:31.760442] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:13:27.648 [2024-04-17 12:57:31.760653] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:13:27.648 passed 00:13:27.648 Test: verify: DIF not generated, REFTAG check ...[2024-04-17 12:57:31.761014] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:13:27.648 [2024-04-17 12:57:31.761189] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:13:27.648 passed 00:13:27.648 Test: verify: 
APPTAG correct, APPTAG check ...passed 00:13:27.648 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-17 12:57:31.761754] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:13:27.648 passed 00:13:27.648 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:13:27.648 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:13:27.648 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:13:27.648 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-17 12:57:31.762811] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:13:27.648 passed 00:13:27.648 Test: strip: DIF generated, GUARD check ...passed 00:13:27.648 Test: strip: DIF generated, APPTAG check ...passed 00:13:27.648 Test: strip: DIF generated, REFTAG check ...passed 00:13:27.648 Test: strip: DIF not generated, GUARD check ...[2024-04-17 12:57:31.763873] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:13:27.648 [2024-04-17 12:57:31.764029] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:13:27.648 passed 00:13:27.648 Test: strip: DIF not generated, APPTAG check ...[2024-04-17 12:57:31.764419] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:13:27.648 [2024-04-17 12:57:31.764563] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:13:27.648 passed 00:13:27.648 Test: strip: DIF not generated, REFTAG check ...[2024-04-17 12:57:31.764917] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:13:27.648 [2024-04-17 12:57:31.765047] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:13:27.648 passed 00:13:27.648 Test: generate copy: DIF generated, GUARD check ...passed 00:13:27.648 Test: generate copy: DIF generated, APTTAG check ...passed 00:13:27.648 Test: generate copy: DIF generated, REFTAG check ...passed 00:13:27.648 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:13:27.648 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:13:27.648 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:13:27.648 Test: generate copy: iovecs-len validate ...[2024-04-17 12:57:31.766672] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:13:27.648 passed 00:13:27.648 Test: generate copy: buffer alignment validate ...passed 00:13:27.648 00:13:27.648 Run Summary: Type Total Ran Passed Failed Inactive 00:13:27.648 suites 1 1 n/a 0 0 00:13:27.648 tests 26 26 26 0 0 00:13:27.648 asserts 285 285 285 0 n/a 00:13:27.648 00:13:27.648 Elapsed time = 0.020 seconds 00:13:29.024 ************************************ 00:13:29.024 END TEST accel_dif_functional_tests 00:13:29.024 ************************************ 00:13:29.024 00:13:29.024 real 0m2.010s 00:13:29.024 user 0m3.824s 00:13:29.024 sys 0m0.269s 00:13:29.024 12:57:32 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:13:29.024 12:57:32 -- common/autotest_common.sh@10 -- # set +x 00:13:29.024 ************************************ 00:13:29.024 END TEST accel 00:13:29.024 ************************************ 00:13:29.024 00:13:29.024 real 1m1.731s 00:13:29.024 user 1m7.376s 00:13:29.024 sys 0m5.914s 00:13:29.024 12:57:32 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:13:29.024 12:57:32 -- common/autotest_common.sh@10 -- # set +x 00:13:29.024 12:57:32 -- spdk/autotest.sh@179 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:13:29.024 12:57:32 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:13:29.024 12:57:32 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:29.024 12:57:32 -- common/autotest_common.sh@10 -- # set +x 00:13:29.024 ************************************ 00:13:29.024 START TEST accel_rpc 00:13:29.024 ************************************ 00:13:29.024 12:57:33 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:13:29.024 * Looking for test storage... 00:13:29.024 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:13:29.024 12:57:33 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:13:29.024 12:57:33 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=114795 00:13:29.024 12:57:33 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:13:29.024 12:57:33 -- accel/accel_rpc.sh@15 -- # waitforlisten 114795 00:13:29.024 12:57:33 -- common/autotest_common.sh@817 -- # '[' -z 114795 ']' 00:13:29.024 12:57:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.024 12:57:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:29.024 12:57:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.024 12:57:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:29.024 12:57:33 -- common/autotest_common.sh@10 -- # set +x 00:13:29.024 [2024-04-17 12:57:33.152431] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
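The accel_dif CUnit suite above exercises verify, strip, and generate-copy paths against deliberately corrupted Guard, App Tag, and Ref Tag fields. A minimal sketch of running that binary by hand under the same layout; the empty subsystem config is an assumption standing in for whatever build_accel_config pipes over /dev/fd/62 in the harness:

SPDK=/home/vagrant/spdk_repo/spdk
# feed a JSON config on a pipe the way run_test does via /dev/fd/62
"$SPDK/test/accel/dif/dif" -c <(printf '{"subsystems":[]}')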
00:13:29.024 [2024-04-17 12:57:33.152802] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114795 ] 00:13:29.282 [2024-04-17 12:57:33.312875] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.540 [2024-04-17 12:57:33.528585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.122 12:57:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:30.122 12:57:34 -- common/autotest_common.sh@850 -- # return 0 00:13:30.122 12:57:34 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:13:30.122 12:57:34 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:13:30.122 12:57:34 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:13:30.122 12:57:34 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:13:30.122 12:57:34 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:13:30.122 12:57:34 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:13:30.122 12:57:34 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:30.122 12:57:34 -- common/autotest_common.sh@10 -- # set +x 00:13:30.122 ************************************ 00:13:30.122 START TEST accel_assign_opcode 00:13:30.122 ************************************ 00:13:30.122 12:57:34 -- common/autotest_common.sh@1099 -- # accel_assign_opcode_test_suite 00:13:30.122 12:57:34 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:13:30.122 12:57:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:30.122 12:57:34 -- common/autotest_common.sh@10 -- # set +x 00:13:30.122 [2024-04-17 12:57:34.197651] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:13:30.122 12:57:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:30.122 12:57:34 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:13:30.122 12:57:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:30.122 12:57:34 -- common/autotest_common.sh@10 -- # set +x 00:13:30.122 [2024-04-17 12:57:34.205613] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:13:30.122 12:57:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:30.122 12:57:34 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:13:30.122 12:57:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:30.122 12:57:34 -- common/autotest_common.sh@10 -- # set +x 00:13:31.058 12:57:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:31.058 12:57:34 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:13:31.058 12:57:34 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:13:31.058 12:57:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:31.058 12:57:34 -- accel/accel_rpc.sh@42 -- # grep software 00:13:31.058 12:57:34 -- common/autotest_common.sh@10 -- # set +x 00:13:31.058 12:57:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:31.058 software 00:13:31.058 ************************************ 00:13:31.058 END TEST accel_assign_opcode 00:13:31.058 ************************************ 00:13:31.058 00:13:31.058 real 0m0.841s 00:13:31.058 user 0m0.056s 00:13:31.058 sys 0m0.013s 00:13:31.058 12:57:35 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:13:31.058 12:57:35 -- common/autotest_common.sh@10 -- # set +x 00:13:31.058 12:57:35 -- accel/accel_rpc.sh@55 -- # 
killprocess 114795 00:13:31.058 12:57:35 -- common/autotest_common.sh@924 -- # '[' -z 114795 ']' 00:13:31.058 12:57:35 -- common/autotest_common.sh@928 -- # kill -0 114795 00:13:31.058 12:57:35 -- common/autotest_common.sh@929 -- # uname 00:13:31.058 12:57:35 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:13:31.058 12:57:35 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 114795 00:13:31.058 killing process with pid 114795 00:13:31.058 12:57:35 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:13:31.058 12:57:35 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:13:31.058 12:57:35 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 114795' 00:13:31.058 12:57:35 -- common/autotest_common.sh@943 -- # kill 114795 00:13:31.058 12:57:35 -- common/autotest_common.sh@948 -- # wait 114795 00:13:33.606 ************************************ 00:13:33.606 END TEST accel_rpc 00:13:33.606 ************************************ 00:13:33.606 00:13:33.606 real 0m4.256s 00:13:33.606 user 0m4.315s 00:13:33.606 sys 0m0.547s 00:13:33.606 12:57:37 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:13:33.606 12:57:37 -- common/autotest_common.sh@10 -- # set +x 00:13:33.606 12:57:37 -- spdk/autotest.sh@180 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:13:33.606 12:57:37 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:13:33.606 12:57:37 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:33.606 12:57:37 -- common/autotest_common.sh@10 -- # set +x 00:13:33.606 ************************************ 00:13:33.606 START TEST app_cmdline 00:13:33.606 ************************************ 00:13:33.606 12:57:37 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:13:33.606 * Looking for test storage... 00:13:33.606 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:13:33.606 12:57:37 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:13:33.606 12:57:37 -- app/cmdline.sh@17 -- # spdk_tgt_pid=114938 00:13:33.606 12:57:37 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:13:33.606 12:57:37 -- app/cmdline.sh@18 -- # waitforlisten 114938 00:13:33.606 12:57:37 -- common/autotest_common.sh@817 -- # '[' -z 114938 ']' 00:13:33.606 12:57:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.606 12:57:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:33.606 12:57:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:33.606 12:57:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:33.606 12:57:37 -- common/autotest_common.sh@10 -- # set +x 00:13:33.606 [2024-04-17 12:57:37.475204] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
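The accel_assign_opcode sequence above is worth pulling out of the trace: the target starts paused under --wait-for-rpc, the copy opcode is pinned to a module before init, and the assignment is read back afterwards. A hand-run sketch under the same layout; the sleep is a crude stand-in for the harness's waitforlisten:

SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/bin/spdk_tgt" --wait-for-rpc &
tgt_pid=$!
sleep 2                                            # crude stand-in for waitforlisten
"$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m software
"$SPDK/scripts/rpc.py" framework_start_init
"$SPDK/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy   # expect: software
kill "$tgt_pid"

Assigning to a bogus module first (-m incorrect, as the trace does) is also accepted at this stage; the assignment is only resolved when framework_start_init runs.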
00:13:33.606 [2024-04-17 12:57:37.475581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114938 ] 00:13:33.606 [2024-04-17 12:57:37.632372] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.891 [2024-04-17 12:57:37.841601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.829 12:57:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:34.829 12:57:38 -- common/autotest_common.sh@850 -- # return 0 00:13:34.829 12:57:38 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:13:34.829 { 00:13:34.829 "version": "SPDK v24.05-pre git sha1 2b97e37d6", 00:13:34.829 "fields": { 00:13:34.829 "major": 24, 00:13:34.829 "minor": 5, 00:13:34.829 "patch": 0, 00:13:34.829 "suffix": "-pre", 00:13:34.829 "commit": "2b97e37d6" 00:13:34.829 } 00:13:34.829 } 00:13:34.829 12:57:38 -- app/cmdline.sh@22 -- # expected_methods=() 00:13:34.829 12:57:38 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:13:34.829 12:57:38 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:13:34.829 12:57:38 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:13:34.829 12:57:38 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:13:34.829 12:57:38 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:13:34.829 12:57:38 -- app/cmdline.sh@26 -- # sort 00:13:34.829 12:57:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:34.829 12:57:38 -- common/autotest_common.sh@10 -- # set +x 00:13:34.829 12:57:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:35.088 12:57:38 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:13:35.088 12:57:38 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:13:35.088 12:57:38 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:35.088 12:57:38 -- common/autotest_common.sh@638 -- # local es=0 00:13:35.088 12:57:38 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:35.088 12:57:38 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:35.088 12:57:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:35.088 12:57:38 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:35.088 12:57:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:35.088 12:57:38 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:35.088 12:57:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:35.088 12:57:38 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:35.088 12:57:38 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:35.088 12:57:38 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:35.347 request: 00:13:35.347 { 00:13:35.347 "method": "env_dpdk_get_mem_stats", 00:13:35.347 "req_id": 1 00:13:35.347 } 00:13:35.347 Got JSON-RPC error response 00:13:35.347 response: 00:13:35.347 { 00:13:35.347 "code": -32601, 00:13:35.347 "message": "Method not found" 
00:13:35.347 } 00:13:35.347 12:57:39 -- common/autotest_common.sh@641 -- # es=1 00:13:35.347 12:57:39 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:35.347 12:57:39 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:35.347 12:57:39 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:35.348 12:57:39 -- app/cmdline.sh@1 -- # killprocess 114938 00:13:35.348 12:57:39 -- common/autotest_common.sh@924 -- # '[' -z 114938 ']' 00:13:35.348 12:57:39 -- common/autotest_common.sh@928 -- # kill -0 114938 00:13:35.348 12:57:39 -- common/autotest_common.sh@929 -- # uname 00:13:35.348 12:57:39 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:13:35.348 12:57:39 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 114938 00:13:35.348 killing process with pid 114938 00:13:35.348 12:57:39 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:13:35.348 12:57:39 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:13:35.348 12:57:39 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 114938' 00:13:35.348 12:57:39 -- common/autotest_common.sh@943 -- # kill 114938 00:13:35.348 12:57:39 -- common/autotest_common.sh@948 -- # wait 114938 00:13:37.882 ************************************ 00:13:37.882 END TEST app_cmdline 00:13:37.882 ************************************ 00:13:37.882 00:13:37.882 real 0m4.144s 00:13:37.882 user 0m4.629s 00:13:37.882 sys 0m0.542s 00:13:37.882 12:57:41 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:13:37.882 12:57:41 -- common/autotest_common.sh@10 -- # set +x 00:13:37.882 12:57:41 -- spdk/autotest.sh@181 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:13:37.882 12:57:41 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:13:37.882 12:57:41 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:37.882 12:57:41 -- common/autotest_common.sh@10 -- # set +x 00:13:37.882 ************************************ 00:13:37.882 START TEST version 00:13:37.882 ************************************ 00:13:37.882 12:57:41 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:13:37.882 * Looking for test storage... 
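app_cmdline above checks both sides of the RPC whitelist: spdk_get_version and rpc_get_methods answer, while anything else is refused with JSON-RPC error -32601. A sketch of the same probe by hand, under the same layout and with the same sleep caveat as before:

SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/bin/spdk_tgt" --rpcs-allowed spdk_get_version,rpc_get_methods &
tgt_pid=$!
sleep 2                                            # crude stand-in for waitforlisten
"$SPDK/scripts/rpc.py" spdk_get_version | jq -r .version
"$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats \
    || echo "rejected as expected: Method not found (-32601)"
kill "$tgt_pid"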
00:13:37.882 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:13:37.882 12:57:41 -- app/version.sh@17 -- # get_header_version major 00:13:37.882 12:57:41 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:37.882 12:57:41 -- app/version.sh@14 -- # cut -f2 00:13:37.882 12:57:41 -- app/version.sh@14 -- # tr -d '"' 00:13:37.882 12:57:41 -- app/version.sh@17 -- # major=24 00:13:37.882 12:57:41 -- app/version.sh@18 -- # get_header_version minor 00:13:37.882 12:57:41 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:37.882 12:57:41 -- app/version.sh@14 -- # cut -f2 00:13:37.882 12:57:41 -- app/version.sh@14 -- # tr -d '"' 00:13:37.882 12:57:41 -- app/version.sh@18 -- # minor=5 00:13:37.882 12:57:41 -- app/version.sh@19 -- # get_header_version patch 00:13:37.882 12:57:41 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:37.882 12:57:41 -- app/version.sh@14 -- # cut -f2 00:13:37.882 12:57:41 -- app/version.sh@14 -- # tr -d '"' 00:13:37.882 12:57:41 -- app/version.sh@19 -- # patch=0 00:13:37.882 12:57:41 -- app/version.sh@20 -- # get_header_version suffix 00:13:37.882 12:57:41 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:37.882 12:57:41 -- app/version.sh@14 -- # cut -f2 00:13:37.882 12:57:41 -- app/version.sh@14 -- # tr -d '"' 00:13:37.882 12:57:41 -- app/version.sh@20 -- # suffix=-pre 00:13:37.882 12:57:41 -- app/version.sh@22 -- # version=24.5 00:13:37.882 12:57:41 -- app/version.sh@25 -- # (( patch != 0 )) 00:13:37.882 12:57:41 -- app/version.sh@28 -- # version=24.5rc0 00:13:37.882 12:57:41 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:37.882 12:57:41 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:13:37.882 12:57:41 -- app/version.sh@30 -- # py_version=24.5rc0 00:13:37.882 12:57:41 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:13:37.882 00:13:37.882 real 0m0.138s 00:13:37.882 user 0m0.119s 00:13:37.882 sys 0m0.048s 00:13:37.882 ************************************ 00:13:37.882 END TEST version 00:13:37.882 ************************************ 00:13:37.882 12:57:41 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:13:37.882 12:57:41 -- common/autotest_common.sh@10 -- # set +x 00:13:37.882 12:57:41 -- spdk/autotest.sh@183 -- # '[' 1 -eq 1 ']' 00:13:37.882 12:57:41 -- spdk/autotest.sh@184 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:13:37.882 12:57:41 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:13:37.882 12:57:41 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:37.882 12:57:41 -- common/autotest_common.sh@10 -- # set +x 00:13:37.882 ************************************ 00:13:37.882 START TEST blockdev_general 00:13:37.882 ************************************ 00:13:37.882 12:57:41 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:13:37.882 * Looking for test storage... 
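version.sh above never starts a target; it scrapes include/spdk/version.h and compares the result against the in-repo python package, mapping the -pre suffix to rc0. The bare cut -f2 works because each #define in version.h separates name and value with a tab. A condensed sketch of the same extraction; the suffix-to-rc0 line is my reading of what the script does between version=24.5 and version=24.5rc0 in the trace:

H=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$H" | cut -f2 | tr -d '"')
minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$H" | cut -f2 | tr -d '"')
suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$H" | cut -f2 | tr -d '"')
version="${major}.${minor}"
[[ $suffix == -pre ]] && version="${version}rc0"   # 24.5rc0 in this run
# with PYTHONPATH pointed at the repo's python/ dir, as the trace sets it:
python3 -c 'import spdk; print(spdk.__version__)'  # must print the same string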
00:13:37.882 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:37.882 12:57:41 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:37.882 12:57:41 -- bdev/nbd_common.sh@6 -- # set -e 00:13:37.882 12:57:41 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:37.882 12:57:41 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:37.882 12:57:41 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:37.882 12:57:41 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:13:37.882 12:57:41 -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:13:37.882 12:57:41 -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:13:37.882 12:57:41 -- bdev/blockdev.sh@20 -- # : 00:13:37.882 12:57:41 -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:13:37.882 12:57:41 -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:13:37.882 12:57:41 -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:13:37.882 12:57:41 -- bdev/blockdev.sh@674 -- # uname -s 00:13:37.882 12:57:41 -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:13:37.882 12:57:41 -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:13:37.882 12:57:41 -- bdev/blockdev.sh@682 -- # test_type=bdev 00:13:37.882 12:57:41 -- bdev/blockdev.sh@683 -- # crypto_device= 00:13:37.882 12:57:41 -- bdev/blockdev.sh@684 -- # dek= 00:13:37.882 12:57:41 -- bdev/blockdev.sh@685 -- # env_ctx= 00:13:37.882 12:57:41 -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:13:37.882 12:57:41 -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:13:37.882 12:57:41 -- bdev/blockdev.sh@690 -- # [[ bdev == bdev ]] 00:13:37.882 12:57:41 -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:13:37.882 12:57:41 -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:13:37.882 12:57:41 -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=115151 00:13:37.882 12:57:41 -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:37.882 12:57:41 -- bdev/blockdev.sh@49 -- # waitforlisten 115151 00:13:37.882 12:57:41 -- common/autotest_common.sh@817 -- # '[' -z 115151 ']' 00:13:37.882 12:57:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.882 12:57:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:37.882 12:57:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.882 12:57:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:37.882 12:57:41 -- common/autotest_common.sh@10 -- # set +x 00:13:37.882 12:57:41 -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:13:37.882 [2024-04-17 12:57:41.922258] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
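The bdev fixture that the blockdev_general trace below assembles against the paused target reduces to a short series of rpc.py calls. A compressed sketch: the names, block counts, and the bdev_aio_create line mirror the trace and the bdev_get_bdevs dump further down, but the loop and the exact flag spellings are assumptions, not the literal contents of blockdev.sh:

SPDK=/home/vagrant/spdk_repo/spdk
RPC="$SPDK/scripts/rpc.py"
for i in 0 1 2 3 4 5 6 7 8 9; do
    "$RPC" bdev_malloc_create -b "Malloc$i" 32 512     # 65536 blocks x 512 B
done
"$RPC" bdev_split_create Malloc1 2                     # Malloc1p0/p1
"$RPC" bdev_split_create Malloc2 8                     # Malloc2p0..p7
"$RPC" bdev_passthru_create -b Malloc3 -p TestPT
"$RPC" bdev_raid_create -n raid0   -z 64 -r raid0  -b "Malloc4 Malloc5"
"$RPC" bdev_raid_create -n concat0 -z 64 -r concat -b "Malloc6 Malloc7"
"$RPC" bdev_raid_create -n raid1         -r raid1  -b "Malloc8 Malloc9"
dd if=/dev/zero of="$SPDK/test/bdev/aiofile" bs=2048 count=5000
"$RPC" bdev_aio_create "$SPDK/test/bdev/aiofile" AIO0 2048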
00:13:37.882 [2024-04-17 12:57:41.922598] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115151 ] 00:13:38.140 [2024-04-17 12:57:42.088195] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.399 [2024-04-17 12:57:42.305897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.966 12:57:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:38.966 12:57:42 -- common/autotest_common.sh@850 -- # return 0 00:13:38.966 12:57:42 -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:13:38.966 12:57:42 -- bdev/blockdev.sh@696 -- # setup_bdev_conf 00:13:38.966 12:57:42 -- bdev/blockdev.sh@53 -- # rpc_cmd 00:13:38.966 12:57:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:38.966 12:57:42 -- common/autotest_common.sh@10 -- # set +x 00:13:39.903 [2024-04-17 12:57:43.714156] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:39.903 [2024-04-17 12:57:43.714423] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:39.903 00:13:39.903 [2024-04-17 12:57:43.722115] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:39.903 [2024-04-17 12:57:43.722325] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:39.903 00:13:39.903 Malloc0 00:13:39.903 Malloc1 00:13:39.903 Malloc2 00:13:39.903 Malloc3 00:13:39.903 Malloc4 00:13:39.903 Malloc5 00:13:39.903 Malloc6 00:13:40.161 Malloc7 00:13:40.161 Malloc8 00:13:40.161 Malloc9 00:13:40.161 [2024-04-17 12:57:44.145107] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:40.161 [2024-04-17 12:57:44.145326] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:40.161 [2024-04-17 12:57:44.145404] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:13:40.161 [2024-04-17 12:57:44.145628] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:40.161 [2024-04-17 12:57:44.148264] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:40.161 [2024-04-17 12:57:44.148491] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:40.161 TestPT 00:13:40.161 12:57:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:40.161 12:57:44 -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:13:40.161 5000+0 records in 00:13:40.161 5000+0 records out 00:13:40.161 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0271387 s, 377 MB/s 00:13:40.161 12:57:44 -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:13:40.161 12:57:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:40.161 12:57:44 -- common/autotest_common.sh@10 -- # set +x 00:13:40.161 AIO0 00:13:40.161 12:57:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:40.161 12:57:44 -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:13:40.161 12:57:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:40.161 12:57:44 -- common/autotest_common.sh@10 -- # set +x 00:13:40.161 12:57:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:40.161 12:57:44 -- bdev/blockdev.sh@740 -- # cat 00:13:40.161 12:57:44 
-- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:13:40.162 12:57:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:40.162 12:57:44 -- common/autotest_common.sh@10 -- # set +x 00:13:40.162 12:57:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:40.162 12:57:44 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:13:40.162 12:57:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:40.162 12:57:44 -- common/autotest_common.sh@10 -- # set +x 00:13:40.421 12:57:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:40.421 12:57:44 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:40.421 12:57:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:40.421 12:57:44 -- common/autotest_common.sh@10 -- # set +x 00:13:40.421 12:57:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:40.421 12:57:44 -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:13:40.421 12:57:44 -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:13:40.421 12:57:44 -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:13:40.421 12:57:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:40.421 12:57:44 -- common/autotest_common.sh@10 -- # set +x 00:13:40.421 12:57:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:40.421 12:57:44 -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:13:40.421 12:57:44 -- bdev/blockdev.sh@749 -- # jq -r .name 00:13:40.422 12:57:44 -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "172861c7-2cfb-4f81-843a-e0174a981298"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "172861c7-2cfb-4f81-843a-e0174a981298",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "f290bd1f-3f1f-5361-86f1-421ed8577b0d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "f290bd1f-3f1f-5361-86f1-421ed8577b0d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "631def9a-23a7-5617-a4ad-6071227a31e6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "631def9a-23a7-5617-a4ad-6071227a31e6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": 
false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "b3e8b928-22cc-543b-ae6d-8057e7876dd3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b3e8b928-22cc-543b-ae6d-8057e7876dd3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "013aaae4-377d-5bf2-be0d-a8df1b703934"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "013aaae4-377d-5bf2-be0d-a8df1b703934",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "7e5ec4b0-6114-5f6e-8fdc-2d6e7749044b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7e5ec4b0-6114-5f6e-8fdc-2d6e7749044b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "3d4fbc76-a2d0-57e5-99a4-5434610318a1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3d4fbc76-a2d0-57e5-99a4-5434610318a1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "7107ab96-53f0-58a0-97e2-a2c50680801b"' ' ],' ' "product_name": "Split Disk",' ' 
"block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7107ab96-53f0-58a0-97e2-a2c50680801b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "eb38b391-d210-5c22-bc4d-81c7917acdad"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "eb38b391-d210-5c22-bc4d-81c7917acdad",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "30a79626-f094-5da0-8bc1-15fa8534bfb4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "30a79626-f094-5da0-8bc1-15fa8534bfb4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "dc69e74f-dfec-5ec9-89a4-14b69076f90c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "dc69e74f-dfec-5ec9-89a4-14b69076f90c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "4de16a82-d902-5fab-af6d-573aa968cf4b"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "4de16a82-d902-5fab-af6d-573aa968cf4b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": 
false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "f85dcb15-d609-416f-9980-df4c67b4a35c"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "f85dcb15-d609-416f-9980-df4c67b4a35c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "f85dcb15-d609-416f-9980-df4c67b4a35c",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "4f9c30e7-88f7-443a-b60c-ebe471bae7ce",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "92c44a26-3125-4263-81c6-f18ccc4e5db1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "7d9bd8ba-05ef-4022-af71-4f926ca49ead"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "7d9bd8ba-05ef-4022-af71-4f926ca49ead",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "7d9bd8ba-05ef-4022-af71-4f926ca49ead",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "f3d5379f-4c53-4641-9929-3d2781c7bf68",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "dc204392-880e-4c3b-9839-0ad929a399b5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' 
"aliases": [' ' "fc5b5885-de6b-4a19-b661-e987b8f1b2f4"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "fc5b5885-de6b-4a19-b661-e987b8f1b2f4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "fc5b5885-de6b-4a19-b661-e987b8f1b2f4",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "af0eadb5-75f4-4cf9-be21-de2f06b74068",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "51e42b2b-902d-45d3-af14-027a236fbaaa",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "245fb0f9-6786-4615-932c-3e54fd1b217a"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "245fb0f9-6786-4615-932c-3e54fd1b217a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:13:40.422 12:57:44 -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:13:40.422 12:57:44 -- bdev/blockdev.sh@752 -- # hello_world_bdev=Malloc0 00:13:40.422 12:57:44 -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:13:40.422 12:57:44 -- bdev/blockdev.sh@754 -- # killprocess 115151 00:13:40.422 12:57:44 -- common/autotest_common.sh@924 -- # '[' -z 115151 ']' 00:13:40.422 12:57:44 -- common/autotest_common.sh@928 -- # kill -0 115151 00:13:40.422 12:57:44 -- common/autotest_common.sh@929 -- # uname 00:13:40.422 12:57:44 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:13:40.422 12:57:44 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 115151 00:13:40.422 12:57:44 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:13:40.422 12:57:44 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:13:40.422 12:57:44 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 115151' 00:13:40.422 killing process with pid 115151 00:13:40.422 12:57:44 -- common/autotest_common.sh@943 -- # kill 115151 00:13:40.422 12:57:44 -- 
common/autotest_common.sh@948 -- # wait 115151 00:13:43.740 12:57:47 -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:43.740 12:57:47 -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:13:43.740 12:57:47 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:13:43.740 12:57:47 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:43.740 12:57:47 -- common/autotest_common.sh@10 -- # set +x 00:13:43.740 ************************************ 00:13:43.740 START TEST bdev_hello_world 00:13:43.740 ************************************ 00:13:43.740 12:57:47 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:13:43.740 [2024-04-17 12:57:47.710935] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:13:43.740 [2024-04-17 12:57:47.711283] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115252 ] 00:13:43.740 [2024-04-17 12:57:47.870137] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.999 [2024-04-17 12:57:48.079240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.565 [2024-04-17 12:57:48.466605] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:44.565 [2024-04-17 12:57:48.466969] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:44.565 [2024-04-17 12:57:48.474565] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:44.565 [2024-04-17 12:57:48.474750] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:44.565 [2024-04-17 12:57:48.482589] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:44.565 [2024-04-17 12:57:48.482788] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:44.565 [2024-04-17 12:57:48.482935] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:44.565 [2024-04-17 12:57:48.686445] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:44.565 [2024-04-17 12:57:48.686795] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:44.565 [2024-04-17 12:57:48.686953] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:44.565 [2024-04-17 12:57:48.687105] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.565 [2024-04-17 12:57:48.689734] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.565 [2024-04-17 12:57:48.689911] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:45.133 [2024-04-17 12:57:49.008763] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:45.133 [2024-04-17 12:57:49.009141] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:13:45.133 [2024-04-17 12:57:49.009435] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:45.133 [2024-04-17 12:57:49.009748] hello_bdev.c: 138:hello_write: *NOTICE*: Writing 
to the bdev 00:13:45.133 [2024-04-17 12:57:49.010055] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:45.133 [2024-04-17 12:57:49.010270] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:45.133 [2024-04-17 12:57:49.010563] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:13:45.133 00:13:45.133 [2024-04-17 12:57:49.010801] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:47.037 ************************************ 00:13:47.037 END TEST bdev_hello_world 00:13:47.037 ************************************ 00:13:47.037 00:13:47.037 real 0m3.455s 00:13:47.037 user 0m2.935s 00:13:47.037 sys 0m0.364s 00:13:47.037 12:57:51 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:13:47.037 12:57:51 -- common/autotest_common.sh@10 -- # set +x 00:13:47.037 12:57:51 -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:13:47.037 12:57:51 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:13:47.037 12:57:51 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:47.037 12:57:51 -- common/autotest_common.sh@10 -- # set +x 00:13:47.037 ************************************ 00:13:47.037 START TEST bdev_bounds 00:13:47.037 ************************************ 00:13:47.037 Process bdevio pid: 115336 00:13:47.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.037 12:57:51 -- common/autotest_common.sh@1099 -- # bdev_bounds '' 00:13:47.037 12:57:51 -- bdev/blockdev.sh@290 -- # bdevio_pid=115336 00:13:47.037 12:57:51 -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:47.037 12:57:51 -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:47.037 12:57:51 -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 115336' 00:13:47.037 12:57:51 -- bdev/blockdev.sh@293 -- # waitforlisten 115336 00:13:47.037 12:57:51 -- common/autotest_common.sh@817 -- # '[' -z 115336 ']' 00:13:47.037 12:57:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.037 12:57:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:47.037 12:57:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.037 12:57:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:47.037 12:57:51 -- common/autotest_common.sh@10 -- # set +x 00:13:47.296 [2024-04-17 12:57:51.264168] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:13:47.296 [2024-04-17 12:57:51.264516] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115336 ] 00:13:47.296 [2024-04-17 12:57:51.437269] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:47.554 [2024-04-17 12:57:51.682302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.554 [2024-04-17 12:57:51.682424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:47.554 [2024-04-17 12:57:51.682440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.122 [2024-04-17 12:57:52.071183] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:48.122 [2024-04-17 12:57:52.071506] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:48.122 [2024-04-17 12:57:52.079144] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:48.122 [2024-04-17 12:57:52.079336] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:48.122 [2024-04-17 12:57:52.087159] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:48.122 [2024-04-17 12:57:52.087353] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:48.122 [2024-04-17 12:57:52.087481] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:48.380 [2024-04-17 12:57:52.288765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:48.381 [2024-04-17 12:57:52.289064] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:48.381 [2024-04-17 12:57:52.289239] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:48.381 [2024-04-17 12:57:52.289369] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:48.381 [2024-04-17 12:57:52.292296] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:48.381 [2024-04-17 12:57:52.292475] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:48.639 12:57:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:48.639 12:57:52 -- common/autotest_common.sh@850 -- # return 0 00:13:48.639 12:57:52 -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:48.639 I/O targets: 00:13:48.639 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:13:48.639 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:13:48.639 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:13:48.639 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:13:48.639 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:13:48.639 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:13:48.639 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:13:48.639 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:13:48.639 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:13:48.639 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:13:48.639 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:13:48.639 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:13:48.639 raid0: 131072 blocks of 512 bytes (64 MiB) 00:13:48.639 concat0: 131072 blocks of 512 bytes (64 MiB) 00:13:48.639 raid1: 65536 blocks of 512 bytes (32 MiB) 00:13:48.639 AIO0: 5000 blocks of 2048 bytes (10 MiB) 
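The sixteen I/O targets above are the Malloc bdevs declared in bdev.json plus the passthru (TestPT), raid0, concat0, raid1 and AIO volumes stacked on top of them. A condensed sketch of reproducing this step by hand, using the same binaries, flags and config paths that appear in this trace; the polling loop stands in for the suite's waitforlisten helper, and rpc.py is assumed to talk to the default /var/tmp/spdk.sock socket:

  SPDK=/home/vagrant/spdk_repo/spdk
  # Start bdevio as an RPC-driven server; -w makes it wait for a
  # perform_tests RPC instead of running the suites immediately.
  "$SPDK"/test/bdev/bdevio/bdevio -w -s 0 --json "$SPDK"/test/bdev/bdev.json &
  bdevio_pid=$!
  # Poll the RPC socket until the app is up (waitforlisten equivalent).
  until "$SPDK"/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
  # Optional: dump the descriptor of one stacked volume from the list above.
  "$SPDK"/scripts/rpc.py bdev_get_bdevs -b raid0
  # Kick off the CUnit suites against every registered bdev, then stop the
  # server (the harness does the same via killprocess).
  "$SPDK"/test/bdev/bdevio/tests.py perform_tests
  kill "$bdevio_pid"

The CUnit banner and the per-bdev suites that follow are the output of that perform_tests call.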
00:13:48.639 00:13:48.639 00:13:48.639 CUnit - A unit testing framework for C - Version 2.1-3 00:13:48.639 http://cunit.sourceforge.net/ 00:13:48.639 00:13:48.639 00:13:48.639 Suite: bdevio tests on: AIO0 00:13:48.639 Test: blockdev write read block ...passed 00:13:48.639 Test: blockdev write zeroes read block ...passed 00:13:48.640 Test: blockdev write zeroes read no split ...passed 00:13:48.898 Test: blockdev write zeroes read split ...passed 00:13:48.898 Test: blockdev write zeroes read split partial ...passed 00:13:48.898 Test: blockdev reset ...passed 00:13:48.898 Test: blockdev write read 8 blocks ...passed 00:13:48.898 Test: blockdev write read size > 128k ...passed 00:13:48.898 Test: blockdev write read invalid size ...passed 00:13:48.898 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:48.898 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:48.898 Test: blockdev write read max offset ...passed 00:13:48.898 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:48.898 Test: blockdev writev readv 8 blocks ...passed 00:13:48.898 Test: blockdev writev readv 30 x 1block ...passed 00:13:48.898 Test: blockdev writev readv block ...passed 00:13:48.898 Test: blockdev writev readv size > 128k ...passed 00:13:48.898 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:48.898 Test: blockdev comparev and writev ...passed 00:13:48.899 Test: blockdev nvme passthru rw ...passed 00:13:48.899 Test: blockdev nvme passthru vendor specific ...passed 00:13:48.899 Test: blockdev nvme admin passthru ...passed 00:13:48.899 Test: blockdev copy ...passed 00:13:48.899 Suite: bdevio tests on: raid1 00:13:48.899 Test: blockdev write read block ...passed 00:13:48.899 Test: blockdev write zeroes read block ...passed 00:13:48.899 Test: blockdev write zeroes read no split ...passed 00:13:48.899 Test: blockdev write zeroes read split ...passed 00:13:48.899 Test: blockdev write zeroes read split partial ...passed 00:13:48.899 Test: blockdev reset ...passed 00:13:48.899 Test: blockdev write read 8 blocks ...passed 00:13:48.899 Test: blockdev write read size > 128k ...passed 00:13:48.899 Test: blockdev write read invalid size ...passed 00:13:48.899 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:48.899 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:48.899 Test: blockdev write read max offset ...passed 00:13:48.899 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:48.899 Test: blockdev writev readv 8 blocks ...passed 00:13:48.899 Test: blockdev writev readv 30 x 1block ...passed 00:13:48.899 Test: blockdev writev readv block ...passed 00:13:48.899 Test: blockdev writev readv size > 128k ...passed 00:13:48.899 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:48.899 Test: blockdev comparev and writev ...passed 00:13:48.899 Test: blockdev nvme passthru rw ...passed 00:13:48.899 Test: blockdev nvme passthru vendor specific ...passed 00:13:48.899 Test: blockdev nvme admin passthru ...passed 00:13:48.899 Test: blockdev copy ...passed 00:13:48.899 Suite: bdevio tests on: concat0 00:13:48.899 Test: blockdev write read block ...passed 00:13:48.899 Test: blockdev write zeroes read block ...passed 00:13:48.899 Test: blockdev write zeroes read no split ...passed 00:13:48.899 Test: blockdev write zeroes read split ...passed 00:13:48.899 Test: blockdev write zeroes read split partial ...passed 00:13:48.899 Test: blockdev reset 
...passed 00:13:48.899 Test: blockdev write read 8 blocks ...passed 00:13:48.899 Test: blockdev write read size > 128k ...passed 00:13:48.899 Test: blockdev write read invalid size ...passed 00:13:48.899 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:48.899 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:48.899 Test: blockdev write read max offset ...passed 00:13:48.899 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:48.899 Test: blockdev writev readv 8 blocks ...passed 00:13:48.899 Test: blockdev writev readv 30 x 1block ...passed 00:13:48.899 Test: blockdev writev readv block ...passed 00:13:48.899 Test: blockdev writev readv size > 128k ...passed 00:13:48.899 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:48.899 Test: blockdev comparev and writev ...passed 00:13:48.899 Test: blockdev nvme passthru rw ...passed 00:13:48.899 Test: blockdev nvme passthru vendor specific ...passed 00:13:48.899 Test: blockdev nvme admin passthru ...passed 00:13:48.899 Test: blockdev copy ...passed 00:13:48.899 Suite: bdevio tests on: raid0 00:13:48.899 Test: blockdev write read block ...passed 00:13:48.899 Test: blockdev write zeroes read block ...passed 00:13:48.899 Test: blockdev write zeroes read no split ...passed 00:13:48.899 Test: blockdev write zeroes read split ...passed 00:13:48.899 Test: blockdev write zeroes read split partial ...passed 00:13:48.899 Test: blockdev reset ...passed 00:13:48.899 Test: blockdev write read 8 blocks ...passed 00:13:48.899 Test: blockdev write read size > 128k ...passed 00:13:48.899 Test: blockdev write read invalid size ...passed 00:13:48.899 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:48.899 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:48.899 Test: blockdev write read max offset ...passed 00:13:48.899 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:48.899 Test: blockdev writev readv 8 blocks ...passed 00:13:48.899 Test: blockdev writev readv 30 x 1block ...passed 00:13:48.899 Test: blockdev writev readv block ...passed 00:13:48.899 Test: blockdev writev readv size > 128k ...passed 00:13:48.899 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:48.899 Test: blockdev comparev and writev ...passed 00:13:48.899 Test: blockdev nvme passthru rw ...passed 00:13:48.899 Test: blockdev nvme passthru vendor specific ...passed 00:13:48.899 Test: blockdev nvme admin passthru ...passed 00:13:48.899 Test: blockdev copy ...passed 00:13:48.899 Suite: bdevio tests on: TestPT 00:13:48.899 Test: blockdev write read block ...passed 00:13:48.899 Test: blockdev write zeroes read block ...passed 00:13:48.899 Test: blockdev write zeroes read no split ...passed 00:13:49.157 Test: blockdev write zeroes read split ...passed 00:13:49.157 Test: blockdev write zeroes read split partial ...passed 00:13:49.157 Test: blockdev reset ...passed 00:13:49.157 Test: blockdev write read 8 blocks ...passed 00:13:49.157 Test: blockdev write read size > 128k ...passed 00:13:49.157 Test: blockdev write read invalid size ...passed 00:13:49.157 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:49.157 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:49.157 Test: blockdev write read max offset ...passed 00:13:49.157 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:49.157 Test: blockdev writev readv 8 blocks 
...passed 00:13:49.157 Test: blockdev writev readv 30 x 1block ...passed 00:13:49.157 Test: blockdev writev readv block ...passed 00:13:49.157 Test: blockdev writev readv size > 128k ...passed 00:13:49.157 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:49.157 Test: blockdev comparev and writev ...passed 00:13:49.157 Test: blockdev nvme passthru rw ...passed 00:13:49.157 Test: blockdev nvme passthru vendor specific ...passed 00:13:49.157 Test: blockdev nvme admin passthru ...passed 00:13:49.157 Test: blockdev copy ...passed 00:13:49.157 Suite: bdevio tests on: Malloc2p7 00:13:49.157 Test: blockdev write read block ...passed 00:13:49.157 Test: blockdev write zeroes read block ...passed 00:13:49.157 Test: blockdev write zeroes read no split ...passed 00:13:49.157 Test: blockdev write zeroes read split ...passed 00:13:49.157 Test: blockdev write zeroes read split partial ...passed 00:13:49.157 Test: blockdev reset ...passed 00:13:49.157 Test: blockdev write read 8 blocks ...passed 00:13:49.157 Test: blockdev write read size > 128k ...passed 00:13:49.157 Test: blockdev write read invalid size ...passed 00:13:49.157 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:49.157 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:49.157 Test: blockdev write read max offset ...passed 00:13:49.157 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:49.157 Test: blockdev writev readv 8 blocks ...passed 00:13:49.157 Test: blockdev writev readv 30 x 1block ...passed 00:13:49.157 Test: blockdev writev readv block ...passed 00:13:49.157 Test: blockdev writev readv size > 128k ...passed 00:13:49.157 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:49.157 Test: blockdev comparev and writev ...passed 00:13:49.157 Test: blockdev nvme passthru rw ...passed 00:13:49.157 Test: blockdev nvme passthru vendor specific ...passed 00:13:49.157 Test: blockdev nvme admin passthru ...passed 00:13:49.157 Test: blockdev copy ...passed 00:13:49.157 Suite: bdevio tests on: Malloc2p6 00:13:49.157 Test: blockdev write read block ...passed 00:13:49.157 Test: blockdev write zeroes read block ...passed 00:13:49.157 Test: blockdev write zeroes read no split ...passed 00:13:49.157 Test: blockdev write zeroes read split ...passed 00:13:49.157 Test: blockdev write zeroes read split partial ...passed 00:13:49.157 Test: blockdev reset ...passed 00:13:49.157 Test: blockdev write read 8 blocks ...passed 00:13:49.157 Test: blockdev write read size > 128k ...passed 00:13:49.157 Test: blockdev write read invalid size ...passed 00:13:49.157 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:49.157 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:49.157 Test: blockdev write read max offset ...passed 00:13:49.157 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:49.157 Test: blockdev writev readv 8 blocks ...passed 00:13:49.157 Test: blockdev writev readv 30 x 1block ...passed 00:13:49.157 Test: blockdev writev readv block ...passed 00:13:49.157 Test: blockdev writev readv size > 128k ...passed 00:13:49.157 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:49.157 Test: blockdev comparev and writev ...passed 00:13:49.157 Test: blockdev nvme passthru rw ...passed 00:13:49.157 Test: blockdev nvme passthru vendor specific ...passed 00:13:49.157 Test: blockdev nvme admin passthru ...passed 00:13:49.157 Test: blockdev copy ...passed 
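Every suite in this run emits the same battery of checks, one line per test, which makes the output easy to tally offline. A small sketch, assuming the console output has been captured to a file (build.log is a hypothetical name, one log entry per line):

  # Number of distinct bdevio suites exercised (16 in this run).
  grep -o 'Suite: bdevio tests on: [A-Za-z0-9]*' build.log | sort -u | wc -l
  # Total passing checks; should match the 'tests' row of the Run Summary below.
  grep -c 'Test: blockdev .*\.\.\.passed' build.log

The Run Summary printed after the last suite reports the same totals (16 suites, 368 tests) as computed by CUnit itself.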
00:13:49.157 Suite: bdevio tests on: Malloc2p5 00:13:49.158 Test: blockdev write read block ...passed 00:13:49.158 Test: blockdev write zeroes read block ...passed 00:13:49.158 Test: blockdev write zeroes read no split ...passed 00:13:49.158 Test: blockdev write zeroes read split ...passed 00:13:49.158 Test: blockdev write zeroes read split partial ...passed 00:13:49.158 Test: blockdev reset ...passed 00:13:49.158 Test: blockdev write read 8 blocks ...passed 00:13:49.158 Test: blockdev write read size > 128k ...passed 00:13:49.158 Test: blockdev write read invalid size ...passed 00:13:49.158 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:49.158 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:49.158 Test: blockdev write read max offset ...passed 00:13:49.158 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:49.158 Test: blockdev writev readv 8 blocks ...passed 00:13:49.158 Test: blockdev writev readv 30 x 1block ...passed 00:13:49.158 Test: blockdev writev readv block ...passed 00:13:49.158 Test: blockdev writev readv size > 128k ...passed 00:13:49.158 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:49.158 Test: blockdev comparev and writev ...passed 00:13:49.158 Test: blockdev nvme passthru rw ...passed 00:13:49.158 Test: blockdev nvme passthru vendor specific ...passed 00:13:49.158 Test: blockdev nvme admin passthru ...passed 00:13:49.158 Test: blockdev copy ...passed 00:13:49.158 Suite: bdevio tests on: Malloc2p4 00:13:49.158 Test: blockdev write read block ...passed 00:13:49.158 Test: blockdev write zeroes read block ...passed 00:13:49.158 Test: blockdev write zeroes read no split ...passed 00:13:49.416 Test: blockdev write zeroes read split ...passed 00:13:49.416 Test: blockdev write zeroes read split partial ...passed 00:13:49.416 Test: blockdev reset ...passed 00:13:49.416 Test: blockdev write read 8 blocks ...passed 00:13:49.416 Test: blockdev write read size > 128k ...passed 00:13:49.416 Test: blockdev write read invalid size ...passed 00:13:49.416 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:49.416 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:49.416 Test: blockdev write read max offset ...passed 00:13:49.416 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:49.416 Test: blockdev writev readv 8 blocks ...passed 00:13:49.416 Test: blockdev writev readv 30 x 1block ...passed 00:13:49.416 Test: blockdev writev readv block ...passed 00:13:49.416 Test: blockdev writev readv size > 128k ...passed 00:13:49.416 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:49.416 Test: blockdev comparev and writev ...passed 00:13:49.416 Test: blockdev nvme passthru rw ...passed 00:13:49.416 Test: blockdev nvme passthru vendor specific ...passed 00:13:49.416 Test: blockdev nvme admin passthru ...passed 00:13:49.416 Test: blockdev copy ...passed 00:13:49.416 Suite: bdevio tests on: Malloc2p3 00:13:49.416 Test: blockdev write read block ...passed 00:13:49.416 Test: blockdev write zeroes read block ...passed 00:13:49.416 Test: blockdev write zeroes read no split ...passed 00:13:49.416 Test: blockdev write zeroes read split ...passed 00:13:49.416 Test: blockdev write zeroes read split partial ...passed 00:13:49.416 Test: blockdev reset ...passed 00:13:49.416 Test: blockdev write read 8 blocks ...passed 00:13:49.416 Test: blockdev write read size > 128k ...passed 00:13:49.416 Test: 
blockdev write read invalid size ...passed 00:13:49.416 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:49.416 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:49.416 Test: blockdev write read max offset ...passed 00:13:49.416 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:49.416 Test: blockdev writev readv 8 blocks ...passed 00:13:49.416 Test: blockdev writev readv 30 x 1block ...passed 00:13:49.416 Test: blockdev writev readv block ...passed 00:13:49.416 Test: blockdev writev readv size > 128k ...passed 00:13:49.416 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:49.416 Test: blockdev comparev and writev ...passed 00:13:49.416 Test: blockdev nvme passthru rw ...passed 00:13:49.416 Test: blockdev nvme passthru vendor specific ...passed 00:13:49.416 Test: blockdev nvme admin passthru ...passed 00:13:49.416 Test: blockdev copy ...passed 00:13:49.416 Suite: bdevio tests on: Malloc2p2 00:13:49.416 Test: blockdev write read block ...passed 00:13:49.416 Test: blockdev write zeroes read block ...passed 00:13:49.416 Test: blockdev write zeroes read no split ...passed 00:13:49.416 Test: blockdev write zeroes read split ...passed 00:13:49.416 Test: blockdev write zeroes read split partial ...passed 00:13:49.416 Test: blockdev reset ...passed 00:13:49.416 Test: blockdev write read 8 blocks ...passed 00:13:49.416 Test: blockdev write read size > 128k ...passed 00:13:49.416 Test: blockdev write read invalid size ...passed 00:13:49.416 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:49.416 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:49.416 Test: blockdev write read max offset ...passed 00:13:49.416 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:49.416 Test: blockdev writev readv 8 blocks ...passed 00:13:49.416 Test: blockdev writev readv 30 x 1block ...passed 00:13:49.416 Test: blockdev writev readv block ...passed 00:13:49.416 Test: blockdev writev readv size > 128k ...passed 00:13:49.416 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:49.416 Test: blockdev comparev and writev ...passed 00:13:49.416 Test: blockdev nvme passthru rw ...passed 00:13:49.416 Test: blockdev nvme passthru vendor specific ...passed 00:13:49.416 Test: blockdev nvme admin passthru ...passed 00:13:49.416 Test: blockdev copy ...passed 00:13:49.416 Suite: bdevio tests on: Malloc2p1 00:13:49.416 Test: blockdev write read block ...passed 00:13:49.416 Test: blockdev write zeroes read block ...passed 00:13:49.416 Test: blockdev write zeroes read no split ...passed 00:13:49.416 Test: blockdev write zeroes read split ...passed 00:13:49.416 Test: blockdev write zeroes read split partial ...passed 00:13:49.416 Test: blockdev reset ...passed 00:13:49.416 Test: blockdev write read 8 blocks ...passed 00:13:49.416 Test: blockdev write read size > 128k ...passed 00:13:49.416 Test: blockdev write read invalid size ...passed 00:13:49.416 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:49.416 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:49.416 Test: blockdev write read max offset ...passed 00:13:49.416 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:49.416 Test: blockdev writev readv 8 blocks ...passed 00:13:49.416 Test: blockdev writev readv 30 x 1block ...passed 00:13:49.416 Test: blockdev writev readv block ...passed 
00:13:49.416 Test: blockdev writev readv size > 128k ...passed 00:13:49.416 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:49.416 Test: blockdev comparev and writev ...passed 00:13:49.416 Test: blockdev nvme passthru rw ...passed 00:13:49.416 Test: blockdev nvme passthru vendor specific ...passed 00:13:49.416 Test: blockdev nvme admin passthru ...passed 00:13:49.416 Test: blockdev copy ...passed 00:13:49.416 Suite: bdevio tests on: Malloc2p0 00:13:49.416 Test: blockdev write read block ...passed 00:13:49.416 Test: blockdev write zeroes read block ...passed 00:13:49.416 Test: blockdev write zeroes read no split ...passed 00:13:49.416 Test: blockdev write zeroes read split ...passed 00:13:49.416 Test: blockdev write zeroes read split partial ...passed 00:13:49.416 Test: blockdev reset ...passed 00:13:49.416 Test: blockdev write read 8 blocks ...passed 00:13:49.416 Test: blockdev write read size > 128k ...passed 00:13:49.416 Test: blockdev write read invalid size ...passed 00:13:49.416 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:49.416 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:49.416 Test: blockdev write read max offset ...passed 00:13:49.416 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:49.416 Test: blockdev writev readv 8 blocks ...passed 00:13:49.416 Test: blockdev writev readv 30 x 1block ...passed 00:13:49.416 Test: blockdev writev readv block ...passed 00:13:49.416 Test: blockdev writev readv size > 128k ...passed 00:13:49.681 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:49.681 Test: blockdev comparev and writev ...passed 00:13:49.681 Test: blockdev nvme passthru rw ...passed 00:13:49.681 Test: blockdev nvme passthru vendor specific ...passed 00:13:49.681 Test: blockdev nvme admin passthru ...passed 00:13:49.681 Test: blockdev copy ...passed 00:13:49.681 Suite: bdevio tests on: Malloc1p1 00:13:49.681 Test: blockdev write read block ...passed 00:13:49.681 Test: blockdev write zeroes read block ...passed 00:13:49.681 Test: blockdev write zeroes read no split ...passed 00:13:49.681 Test: blockdev write zeroes read split ...passed 00:13:49.681 Test: blockdev write zeroes read split partial ...passed 00:13:49.681 Test: blockdev reset ...passed 00:13:49.681 Test: blockdev write read 8 blocks ...passed 00:13:49.681 Test: blockdev write read size > 128k ...passed 00:13:49.681 Test: blockdev write read invalid size ...passed 00:13:49.681 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:49.681 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:49.681 Test: blockdev write read max offset ...passed 00:13:49.681 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:49.681 Test: blockdev writev readv 8 blocks ...passed 00:13:49.681 Test: blockdev writev readv 30 x 1block ...passed 00:13:49.681 Test: blockdev writev readv block ...passed 00:13:49.681 Test: blockdev writev readv size > 128k ...passed 00:13:49.681 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:49.681 Test: blockdev comparev and writev ...passed 00:13:49.681 Test: blockdev nvme passthru rw ...passed 00:13:49.681 Test: blockdev nvme passthru vendor specific ...passed 00:13:49.681 Test: blockdev nvme admin passthru ...passed 00:13:49.681 Test: blockdev copy ...passed 00:13:49.681 Suite: bdevio tests on: Malloc1p0 00:13:49.681 Test: blockdev write read block ...passed 00:13:49.681 Test: blockdev 
write zeroes read block ...passed 00:13:49.681 Test: blockdev write zeroes read no split ...passed 00:13:49.681 Test: blockdev write zeroes read split ...passed 00:13:49.681 Test: blockdev write zeroes read split partial ...passed 00:13:49.681 Test: blockdev reset ...passed 00:13:49.681 Test: blockdev write read 8 blocks ...passed 00:13:49.681 Test: blockdev write read size > 128k ...passed 00:13:49.681 Test: blockdev write read invalid size ...passed 00:13:49.681 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:49.681 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:49.681 Test: blockdev write read max offset ...passed 00:13:49.681 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:49.681 Test: blockdev writev readv 8 blocks ...passed 00:13:49.681 Test: blockdev writev readv 30 x 1block ...passed 00:13:49.681 Test: blockdev writev readv block ...passed 00:13:49.681 Test: blockdev writev readv size > 128k ...passed 00:13:49.681 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:49.681 Test: blockdev comparev and writev ...passed 00:13:49.681 Test: blockdev nvme passthru rw ...passed 00:13:49.681 Test: blockdev nvme passthru vendor specific ...passed 00:13:49.681 Test: blockdev nvme admin passthru ...passed 00:13:49.681 Test: blockdev copy ...passed 00:13:49.681 Suite: bdevio tests on: Malloc0 00:13:49.681 Test: blockdev write read block ...passed 00:13:49.681 Test: blockdev write zeroes read block ...passed 00:13:49.681 Test: blockdev write zeroes read no split ...passed 00:13:49.681 Test: blockdev write zeroes read split ...passed 00:13:49.681 Test: blockdev write zeroes read split partial ...passed 00:13:49.681 Test: blockdev reset ...passed 00:13:49.681 Test: blockdev write read 8 blocks ...passed 00:13:49.681 Test: blockdev write read size > 128k ...passed 00:13:49.681 Test: blockdev write read invalid size ...passed 00:13:49.681 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:49.681 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:49.681 Test: blockdev write read max offset ...passed 00:13:49.681 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:49.681 Test: blockdev writev readv 8 blocks ...passed 00:13:49.681 Test: blockdev writev readv 30 x 1block ...passed 00:13:49.681 Test: blockdev writev readv block ...passed 00:13:49.681 Test: blockdev writev readv size > 128k ...passed 00:13:49.681 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:49.681 Test: blockdev comparev and writev ...passed 00:13:49.681 Test: blockdev nvme passthru rw ...passed 00:13:49.681 Test: blockdev nvme passthru vendor specific ...passed 00:13:49.681 Test: blockdev nvme admin passthru ...passed 00:13:49.681 Test: blockdev copy ...passed 00:13:49.681 00:13:49.681 Run Summary: Type Total Ran Passed Failed Inactive 00:13:49.681 suites 16 16 n/a 0 0 00:13:49.681 tests 368 368 368 0 0 00:13:49.681 asserts 2224 2224 2224 0 n/a 00:13:49.681 00:13:49.681 Elapsed time = 2.722 seconds 00:13:49.681 0 00:13:49.681 12:57:53 -- bdev/blockdev.sh@295 -- # killprocess 115336 00:13:49.681 12:57:53 -- common/autotest_common.sh@924 -- # '[' -z 115336 ']' 00:13:49.681 12:57:53 -- common/autotest_common.sh@928 -- # kill -0 115336 00:13:49.681 12:57:53 -- common/autotest_common.sh@929 -- # uname 00:13:49.681 12:57:53 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:13:49.681 12:57:53 -- 
common/autotest_common.sh@930 -- # ps --no-headers -o comm= 115336 00:13:49.681 12:57:53 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:13:49.681 killing process with pid 115336 00:13:49.681 12:57:53 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:13:49.681 12:57:53 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 115336' 00:13:49.681 12:57:53 -- common/autotest_common.sh@943 -- # kill 115336 00:13:49.681 12:57:53 -- common/autotest_common.sh@948 -- # wait 115336 00:13:51.581 ************************************ 00:13:51.581 END TEST bdev_bounds 00:13:51.581 ************************************ 00:13:51.581 12:57:55 -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:13:51.581 00:13:51.581 real 0m4.447s 00:13:51.581 user 0m11.274s 00:13:51.581 sys 0m0.517s 00:13:51.581 12:57:55 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:13:51.581 12:57:55 -- common/autotest_common.sh@10 -- # set +x 00:13:51.581 12:57:55 -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:13:51.581 12:57:55 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:13:51.581 12:57:55 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:51.581 12:57:55 -- common/autotest_common.sh@10 -- # set +x 00:13:51.581 ************************************ 00:13:51.581 START TEST bdev_nbd 00:13:51.581 ************************************ 00:13:51.581 12:57:55 -- common/autotest_common.sh@1099 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:13:51.581 12:57:55 -- bdev/blockdev.sh@300 -- # uname -s 00:13:51.581 12:57:55 -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:13:51.581 12:57:55 -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:51.581 12:57:55 -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:51.581 12:57:55 -- bdev/blockdev.sh@304 -- # bdev_all=($2) 00:13:51.581 12:57:55 -- bdev/blockdev.sh@304 -- # local bdev_all 00:13:51.581 12:57:55 -- bdev/blockdev.sh@305 -- # local bdev_num=16 00:13:51.581 12:57:55 -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:13:51.581 12:57:55 -- bdev/blockdev.sh@311 -- # nbd_all=(/dev/nbd+([0-9])) 00:13:51.581 12:57:55 -- bdev/blockdev.sh@311 -- # local nbd_all 00:13:51.581 12:57:55 -- bdev/blockdev.sh@312 -- # bdev_num=16 00:13:51.581 12:57:55 -- bdev/blockdev.sh@314 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:13:51.581 12:57:55 -- bdev/blockdev.sh@314 -- # local nbd_list 00:13:51.581 12:57:55 -- bdev/blockdev.sh@315 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:13:51.581 12:57:55 -- bdev/blockdev.sh@315 -- # local bdev_list 00:13:51.581 12:57:55 -- bdev/blockdev.sh@318 -- # nbd_pid=115434 00:13:51.581 12:57:55 -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:51.581 12:57:55 -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:51.581 12:57:55 -- bdev/blockdev.sh@320 -- # waitforlisten 115434 /var/tmp/spdk-nbd.sock 00:13:51.581 12:57:55 -- common/autotest_common.sh@817 -- # '[' -z 115434 ']' 
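Once the /var/tmp/spdk-nbd.sock socket is listening, nbd_function_test exports each of the sixteen bdevs through the kernel NBD driver and proves it serves I/O with a single direct read; the trace below shows that loop unrolled device by device. A compressed sketch of the same flow, using only RPCs and checks visible in the trace (/tmp/nbdtest stands in for the suite's nbdtest scratch file, and only the first three bdevs are listed for brevity):

  SPDK=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/spdk-nbd.sock
  for bdev in Malloc0 Malloc1p0 Malloc1p1; do
      # Export the bdev via the kernel NBD driver; with no explicit device
      # argument SPDK picks a free /dev/nbdN and prints its path.
      dev=$("$SPDK"/scripts/rpc.py -s "$sock" nbd_start_disk "$bdev")
      # waitfornbd equivalent: the node is usable once it shows up in
      # /proc/partitions, the same grep the trace below performs.
      until grep -q -w "$(basename "$dev")" /proc/partitions; do sleep 0.1; done
      # One 4 KiB direct read, the same smoke test as the traced dd calls.
      dd if="$dev" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
      rm -f /tmp/nbdtest
  done

The traced run walks all sixteen devices (/dev/nbd0 through /dev/nbd13 plus nbd devices for raid and AIO volumes) in exactly this pattern.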
00:13:51.581 12:57:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:51.581 12:57:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:51.581 12:57:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:51.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:51.581 12:57:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:51.581 12:57:55 -- common/autotest_common.sh@10 -- # set +x 00:13:51.841 [2024-04-17 12:57:55.773082] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:13:51.841 [2024-04-17 12:57:55.773435] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:51.841 [2024-04-17 12:57:55.937608] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.101 [2024-04-17 12:57:56.183278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.667 [2024-04-17 12:57:56.568095] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:52.667 [2024-04-17 12:57:56.568394] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:52.667 [2024-04-17 12:57:56.576045] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:52.667 [2024-04-17 12:57:56.576211] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:52.667 [2024-04-17 12:57:56.584055] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:52.667 [2024-04-17 12:57:56.584209] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:52.667 [2024-04-17 12:57:56.584345] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:52.667 [2024-04-17 12:57:56.788875] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:52.667 [2024-04-17 12:57:56.789226] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.667 [2024-04-17 12:57:56.789383] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:52.668 [2024-04-17 12:57:56.789519] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.668 [2024-04-17 12:57:56.792396] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.668 [2024-04-17 12:57:56.792585] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:53.234 12:57:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:53.234 12:57:57 -- common/autotest_common.sh@850 -- # return 0 00:13:53.234 12:57:57 -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:13:53.234 12:57:57 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:53.234 12:57:57 -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:13:53.234 12:57:57 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:53.234 12:57:57 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 
Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:13:53.234 12:57:57 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:53.234 12:57:57 -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:13:53.234 12:57:57 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:53.234 12:57:57 -- bdev/nbd_common.sh@24 -- # local i 00:13:53.234 12:57:57 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:53.234 12:57:57 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:53.234 12:57:57 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:53.234 12:57:57 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:13:53.493 12:57:57 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:53.493 12:57:57 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:53.493 12:57:57 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:53.493 12:57:57 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:13:53.493 12:57:57 -- common/autotest_common.sh@855 -- # local i 00:13:53.493 12:57:57 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:53.493 12:57:57 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:53.493 12:57:57 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:13:53.493 12:57:57 -- common/autotest_common.sh@859 -- # break 00:13:53.493 12:57:57 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:53.493 12:57:57 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:53.493 12:57:57 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:53.493 1+0 records in 00:13:53.493 1+0 records out 00:13:53.493 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387271 s, 10.6 MB/s 00:13:53.493 12:57:57 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:53.493 12:57:57 -- common/autotest_common.sh@872 -- # size=4096 00:13:53.493 12:57:57 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:53.493 12:57:57 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:53.493 12:57:57 -- common/autotest_common.sh@875 -- # return 0 00:13:53.493 12:57:57 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:53.493 12:57:57 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:53.493 12:57:57 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:13:53.752 12:57:57 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:53.752 12:57:57 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:53.752 12:57:57 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:53.752 12:57:57 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:13:53.752 12:57:57 -- common/autotest_common.sh@855 -- # local i 00:13:53.752 12:57:57 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:53.752 12:57:57 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:53.752 12:57:57 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:13:53.752 12:57:57 -- common/autotest_common.sh@859 -- # break 00:13:53.752 12:57:57 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:53.752 12:57:57 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:53.752 12:57:57 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:13:53.752 1+0 records in 00:13:53.752 1+0 records out 00:13:53.752 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00071446 s, 5.7 MB/s 00:13:53.752 12:57:57 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:53.752 12:57:57 -- common/autotest_common.sh@872 -- # size=4096 00:13:53.752 12:57:57 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:53.752 12:57:57 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:53.752 12:57:57 -- common/autotest_common.sh@875 -- # return 0 00:13:53.752 12:57:57 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:53.752 12:57:57 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:53.752 12:57:57 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:13:54.010 12:57:57 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:54.010 12:57:57 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:54.010 12:57:57 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:54.010 12:57:57 -- common/autotest_common.sh@854 -- # local nbd_name=nbd2 00:13:54.010 12:57:57 -- common/autotest_common.sh@855 -- # local i 00:13:54.010 12:57:57 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:54.010 12:57:57 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:54.010 12:57:57 -- common/autotest_common.sh@858 -- # grep -q -w nbd2 /proc/partitions 00:13:54.010 12:57:57 -- common/autotest_common.sh@859 -- # break 00:13:54.010 12:57:57 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:54.010 12:57:57 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:54.010 12:57:57 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:54.010 1+0 records in 00:13:54.010 1+0 records out 00:13:54.010 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000369972 s, 11.1 MB/s 00:13:54.010 12:57:57 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:54.010 12:57:57 -- common/autotest_common.sh@872 -- # size=4096 00:13:54.010 12:57:57 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:54.010 12:57:57 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:54.010 12:57:57 -- common/autotest_common.sh@875 -- # return 0 00:13:54.010 12:57:57 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:54.010 12:57:57 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:54.010 12:57:57 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:13:54.269 12:57:58 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:54.269 12:57:58 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:54.269 12:57:58 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:54.269 12:57:58 -- common/autotest_common.sh@854 -- # local nbd_name=nbd3 00:13:54.269 12:57:58 -- common/autotest_common.sh@855 -- # local i 00:13:54.269 12:57:58 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:54.269 12:57:58 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:54.269 12:57:58 -- common/autotest_common.sh@858 -- # grep -q -w nbd3 /proc/partitions 00:13:54.269 12:57:58 -- common/autotest_common.sh@859 -- # break 00:13:54.269 12:57:58 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:54.269 12:57:58 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:54.269 12:57:58 -- common/autotest_common.sh@871 -- # dd 
if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:54.269 1+0 records in 00:13:54.269 1+0 records out 00:13:54.269 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000528041 s, 7.8 MB/s 00:13:54.269 12:57:58 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:54.269 12:57:58 -- common/autotest_common.sh@872 -- # size=4096 00:13:54.269 12:57:58 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:54.269 12:57:58 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:54.269 12:57:58 -- common/autotest_common.sh@875 -- # return 0 00:13:54.269 12:57:58 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:54.269 12:57:58 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:54.269 12:57:58 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:13:54.527 12:57:58 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:54.527 12:57:58 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:54.527 12:57:58 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:54.527 12:57:58 -- common/autotest_common.sh@854 -- # local nbd_name=nbd4 00:13:54.527 12:57:58 -- common/autotest_common.sh@855 -- # local i 00:13:54.527 12:57:58 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:54.527 12:57:58 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:54.527 12:57:58 -- common/autotest_common.sh@858 -- # grep -q -w nbd4 /proc/partitions 00:13:54.527 12:57:58 -- common/autotest_common.sh@859 -- # break 00:13:54.527 12:57:58 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:54.527 12:57:58 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:54.527 12:57:58 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:54.527 1+0 records in 00:13:54.527 1+0 records out 00:13:54.527 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000337186 s, 12.1 MB/s 00:13:54.527 12:57:58 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:54.527 12:57:58 -- common/autotest_common.sh@872 -- # size=4096 00:13:54.527 12:57:58 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:54.527 12:57:58 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:54.527 12:57:58 -- common/autotest_common.sh@875 -- # return 0 00:13:54.527 12:57:58 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:54.527 12:57:58 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:54.527 12:57:58 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:13:55.094 12:57:58 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:55.094 12:57:58 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:55.094 12:57:58 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:55.094 12:57:58 -- common/autotest_common.sh@854 -- # local nbd_name=nbd5 00:13:55.094 12:57:58 -- common/autotest_common.sh@855 -- # local i 00:13:55.094 12:57:58 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:55.094 12:57:58 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:55.094 12:57:58 -- common/autotest_common.sh@858 -- # grep -q -w nbd5 /proc/partitions 00:13:55.094 12:57:58 -- common/autotest_common.sh@859 -- # break 00:13:55.094 12:57:58 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:55.094 12:57:58 -- 
common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:55.094 12:57:58 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:55.094 1+0 records in 00:13:55.094 1+0 records out 00:13:55.094 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000467907 s, 8.8 MB/s 00:13:55.094 12:57:58 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:55.094 12:57:58 -- common/autotest_common.sh@872 -- # size=4096 00:13:55.094 12:57:58 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:55.094 12:57:58 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:55.094 12:57:58 -- common/autotest_common.sh@875 -- # return 0 00:13:55.094 12:57:58 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:55.094 12:57:58 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:55.094 12:57:58 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:13:55.352 12:57:59 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:13:55.352 12:57:59 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:13:55.352 12:57:59 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:13:55.352 12:57:59 -- common/autotest_common.sh@854 -- # local nbd_name=nbd6 00:13:55.352 12:57:59 -- common/autotest_common.sh@855 -- # local i 00:13:55.352 12:57:59 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:55.352 12:57:59 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:55.352 12:57:59 -- common/autotest_common.sh@858 -- # grep -q -w nbd6 /proc/partitions 00:13:55.352 12:57:59 -- common/autotest_common.sh@859 -- # break 00:13:55.352 12:57:59 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:55.352 12:57:59 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:55.352 12:57:59 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:55.352 1+0 records in 00:13:55.352 1+0 records out 00:13:55.352 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363438 s, 11.3 MB/s 00:13:55.352 12:57:59 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:55.352 12:57:59 -- common/autotest_common.sh@872 -- # size=4096 00:13:55.352 12:57:59 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:55.352 12:57:59 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:55.352 12:57:59 -- common/autotest_common.sh@875 -- # return 0 00:13:55.352 12:57:59 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:55.352 12:57:59 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:55.352 12:57:59 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:13:55.610 12:57:59 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:13:55.610 12:57:59 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:13:55.610 12:57:59 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:13:55.610 12:57:59 -- common/autotest_common.sh@854 -- # local nbd_name=nbd7 00:13:55.610 12:57:59 -- common/autotest_common.sh@855 -- # local i 00:13:55.610 12:57:59 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:55.610 12:57:59 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:55.611 12:57:59 -- common/autotest_common.sh@858 -- # grep -q -w nbd7 /proc/partitions 00:13:55.611 12:57:59 -- common/autotest_common.sh@859 -- # break 
00:13:55.611 12:57:59 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:55.611 12:57:59 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:55.611 12:57:59 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:55.611 1+0 records in 00:13:55.611 1+0 records out 00:13:55.611 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000945793 s, 4.3 MB/s 00:13:55.611 12:57:59 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:55.611 12:57:59 -- common/autotest_common.sh@872 -- # size=4096 00:13:55.611 12:57:59 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:55.611 12:57:59 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:55.611 12:57:59 -- common/autotest_common.sh@875 -- # return 0 00:13:55.611 12:57:59 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:55.611 12:57:59 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:55.611 12:57:59 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:13:55.868 12:57:59 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:13:55.868 12:57:59 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:13:55.868 12:57:59 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:13:55.868 12:57:59 -- common/autotest_common.sh@854 -- # local nbd_name=nbd8 00:13:55.868 12:57:59 -- common/autotest_common.sh@855 -- # local i 00:13:55.868 12:57:59 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:55.868 12:57:59 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:55.868 12:57:59 -- common/autotest_common.sh@858 -- # grep -q -w nbd8 /proc/partitions 00:13:55.868 12:57:59 -- common/autotest_common.sh@859 -- # break 00:13:55.868 12:57:59 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:55.868 12:57:59 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:55.868 12:57:59 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:55.868 1+0 records in 00:13:55.868 1+0 records out 00:13:55.868 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000619843 s, 6.6 MB/s 00:13:55.868 12:57:59 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:55.868 12:57:59 -- common/autotest_common.sh@872 -- # size=4096 00:13:55.868 12:57:59 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:55.868 12:57:59 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:55.868 12:57:59 -- common/autotest_common.sh@875 -- # return 0 00:13:55.868 12:57:59 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:55.868 12:57:59 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:55.868 12:57:59 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:13:56.126 12:58:00 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:13:56.126 12:58:00 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:13:56.126 12:58:00 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:13:56.126 12:58:00 -- common/autotest_common.sh@854 -- # local nbd_name=nbd9 00:13:56.126 12:58:00 -- common/autotest_common.sh@855 -- # local i 00:13:56.126 12:58:00 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:56.126 12:58:00 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:56.126 12:58:00 -- common/autotest_common.sh@858 -- # grep -q -w 
nbd9 /proc/partitions 00:13:56.126 12:58:00 -- common/autotest_common.sh@859 -- # break 00:13:56.126 12:58:00 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:56.126 12:58:00 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:56.126 12:58:00 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:56.126 1+0 records in 00:13:56.126 1+0 records out 00:13:56.126 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00204927 s, 2.0 MB/s 00:13:56.126 12:58:00 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.126 12:58:00 -- common/autotest_common.sh@872 -- # size=4096 00:13:56.126 12:58:00 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.126 12:58:00 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:56.126 12:58:00 -- common/autotest_common.sh@875 -- # return 0 00:13:56.126 12:58:00 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:56.126 12:58:00 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:56.126 12:58:00 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:13:56.384 12:58:00 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:13:56.384 12:58:00 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:13:56.384 12:58:00 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:13:56.384 12:58:00 -- common/autotest_common.sh@854 -- # local nbd_name=nbd10 00:13:56.384 12:58:00 -- common/autotest_common.sh@855 -- # local i 00:13:56.384 12:58:00 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:56.384 12:58:00 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:56.384 12:58:00 -- common/autotest_common.sh@858 -- # grep -q -w nbd10 /proc/partitions 00:13:56.384 12:58:00 -- common/autotest_common.sh@859 -- # break 00:13:56.384 12:58:00 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:56.384 12:58:00 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:56.384 12:58:00 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:56.384 1+0 records in 00:13:56.384 1+0 records out 00:13:56.384 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00069285 s, 5.9 MB/s 00:13:56.384 12:58:00 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.384 12:58:00 -- common/autotest_common.sh@872 -- # size=4096 00:13:56.384 12:58:00 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.384 12:58:00 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:56.384 12:58:00 -- common/autotest_common.sh@875 -- # return 0 00:13:56.384 12:58:00 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:56.384 12:58:00 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:56.384 12:58:00 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:13:56.643 12:58:00 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:13:56.643 12:58:00 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:13:56.643 12:58:00 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:13:56.643 12:58:00 -- common/autotest_common.sh@854 -- # local nbd_name=nbd11 00:13:56.643 12:58:00 -- common/autotest_common.sh@855 -- # local i 00:13:56.643 12:58:00 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:56.643 12:58:00 -- 
common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:56.643 12:58:00 -- common/autotest_common.sh@858 -- # grep -q -w nbd11 /proc/partitions 00:13:56.643 12:58:00 -- common/autotest_common.sh@859 -- # break 00:13:56.643 12:58:00 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:56.643 12:58:00 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:56.643 12:58:00 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:56.643 1+0 records in 00:13:56.643 1+0 records out 00:13:56.643 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000575393 s, 7.1 MB/s 00:13:56.643 12:58:00 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.643 12:58:00 -- common/autotest_common.sh@872 -- # size=4096 00:13:56.643 12:58:00 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.643 12:58:00 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:56.643 12:58:00 -- common/autotest_common.sh@875 -- # return 0 00:13:56.643 12:58:00 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:56.643 12:58:00 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:56.643 12:58:00 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:13:56.902 12:58:01 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:13:56.902 12:58:01 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:13:56.902 12:58:01 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:13:56.902 12:58:01 -- common/autotest_common.sh@854 -- # local nbd_name=nbd12 00:13:56.902 12:58:01 -- common/autotest_common.sh@855 -- # local i 00:13:56.902 12:58:01 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:56.902 12:58:01 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:56.902 12:58:01 -- common/autotest_common.sh@858 -- # grep -q -w nbd12 /proc/partitions 00:13:56.902 12:58:01 -- common/autotest_common.sh@859 -- # break 00:13:56.902 12:58:01 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:56.902 12:58:01 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:56.902 12:58:01 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:56.902 1+0 records in 00:13:56.902 1+0 records out 00:13:56.902 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00114539 s, 3.6 MB/s 00:13:56.902 12:58:01 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.902 12:58:01 -- common/autotest_common.sh@872 -- # size=4096 00:13:56.902 12:58:01 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.902 12:58:01 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:56.902 12:58:01 -- common/autotest_common.sh@875 -- # return 0 00:13:56.902 12:58:01 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:56.902 12:58:01 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:56.902 12:58:01 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:13:57.470 12:58:01 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:13:57.470 12:58:01 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:13:57.470 12:58:01 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:13:57.470 12:58:01 -- common/autotest_common.sh@854 -- # local nbd_name=nbd13 00:13:57.470 12:58:01 -- common/autotest_common.sh@855 -- # local i 
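Each nbd_start_disk call above is immediately followed by the same waitfornbd polling pattern from autotest_common.sh (trace markers @854-@875). A minimal sketch of that helper, pieced together from the trace — the /tmp scratch path, the sleep back-off, and the failure return are assumptions, since the log only records iterations that succeed on the first try:

    waitfornbd() {
        local nbd_name=$1
        local i
        # Wait up to ~2 s for the kernel to publish the device node.
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                break
            fi
            sleep 0.1    # assumed back-off; the log only shows first-try hits
        done
        # Prove the device answers I/O: read one 4 KiB page with O_DIRECT.
        for ((i = 1; i <= 20; i++)); do
            if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct; then
                local size
                size=$(stat -c %s /tmp/nbdtest)
                rm -f /tmp/nbdtest
                if [ "$size" != "0" ]; then
                    return 0
                fi
            fi
            sleep 0.1    # assumed
        done
        return 1
    }

The iflag=direct read matters here: it forces a real round trip through the nbd kernel driver to the SPDK target instead of being satisfied from the page cache, which is why each probe in the log reports a genuine 4096-byte transfer.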
00:13:57.470 12:58:01 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:57.470 12:58:01 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:57.470 12:58:01 -- common/autotest_common.sh@858 -- # grep -q -w nbd13 /proc/partitions 00:13:57.470 12:58:01 -- common/autotest_common.sh@859 -- # break 00:13:57.470 12:58:01 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:57.470 12:58:01 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:57.470 12:58:01 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:57.470 1+0 records in 00:13:57.470 1+0 records out 00:13:57.470 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000800355 s, 5.1 MB/s 00:13:57.470 12:58:01 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.470 12:58:01 -- common/autotest_common.sh@872 -- # size=4096 00:13:57.470 12:58:01 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.470 12:58:01 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:57.470 12:58:01 -- common/autotest_common.sh@875 -- # return 0 00:13:57.470 12:58:01 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:57.470 12:58:01 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:57.470 12:58:01 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:13:57.729 12:58:01 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:13:57.729 12:58:01 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:13:57.729 12:58:01 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:13:57.729 12:58:01 -- common/autotest_common.sh@854 -- # local nbd_name=nbd14 00:13:57.729 12:58:01 -- common/autotest_common.sh@855 -- # local i 00:13:57.729 12:58:01 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:57.729 12:58:01 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:57.729 12:58:01 -- common/autotest_common.sh@858 -- # grep -q -w nbd14 /proc/partitions 00:13:57.729 12:58:01 -- common/autotest_common.sh@859 -- # break 00:13:57.729 12:58:01 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:57.729 12:58:01 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:57.729 12:58:01 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:57.729 1+0 records in 00:13:57.729 1+0 records out 00:13:57.729 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000570308 s, 7.2 MB/s 00:13:57.729 12:58:01 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.729 12:58:01 -- common/autotest_common.sh@872 -- # size=4096 00:13:57.729 12:58:01 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.729 12:58:01 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:57.729 12:58:01 -- common/autotest_common.sh@875 -- # return 0 00:13:57.729 12:58:01 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:57.729 12:58:01 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:57.729 12:58:01 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:13:57.988 12:58:01 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:13:57.988 12:58:01 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:13:57.988 12:58:01 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:13:57.988 12:58:01 -- common/autotest_common.sh@854 -- 
# local nbd_name=nbd15 00:13:57.988 12:58:01 -- common/autotest_common.sh@855 -- # local i 00:13:57.988 12:58:01 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:57.988 12:58:01 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:57.988 12:58:01 -- common/autotest_common.sh@858 -- # grep -q -w nbd15 /proc/partitions 00:13:57.988 12:58:01 -- common/autotest_common.sh@859 -- # break 00:13:57.988 12:58:01 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:57.988 12:58:01 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:57.988 12:58:01 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:57.988 1+0 records in 00:13:57.988 1+0 records out 00:13:57.988 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00124362 s, 3.3 MB/s 00:13:57.988 12:58:01 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.988 12:58:01 -- common/autotest_common.sh@872 -- # size=4096 00:13:57.988 12:58:01 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.988 12:58:01 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:57.988 12:58:01 -- common/autotest_common.sh@875 -- # return 0 00:13:57.988 12:58:01 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:57.988 12:58:01 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:57.988 12:58:01 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:58.247 12:58:02 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:58.247 { 00:13:58.247 "nbd_device": "/dev/nbd0", 00:13:58.247 "bdev_name": "Malloc0" 00:13:58.247 }, 00:13:58.247 { 00:13:58.247 "nbd_device": "/dev/nbd1", 00:13:58.247 "bdev_name": "Malloc1p0" 00:13:58.247 }, 00:13:58.247 { 00:13:58.247 "nbd_device": "/dev/nbd2", 00:13:58.247 "bdev_name": "Malloc1p1" 00:13:58.247 }, 00:13:58.247 { 00:13:58.247 "nbd_device": "/dev/nbd3", 00:13:58.247 "bdev_name": "Malloc2p0" 00:13:58.247 }, 00:13:58.247 { 00:13:58.247 "nbd_device": "/dev/nbd4", 00:13:58.247 "bdev_name": "Malloc2p1" 00:13:58.247 }, 00:13:58.247 { 00:13:58.247 "nbd_device": "/dev/nbd5", 00:13:58.247 "bdev_name": "Malloc2p2" 00:13:58.247 }, 00:13:58.247 { 00:13:58.247 "nbd_device": "/dev/nbd6", 00:13:58.247 "bdev_name": "Malloc2p3" 00:13:58.247 }, 00:13:58.247 { 00:13:58.247 "nbd_device": "/dev/nbd7", 00:13:58.247 "bdev_name": "Malloc2p4" 00:13:58.247 }, 00:13:58.247 { 00:13:58.247 "nbd_device": "/dev/nbd8", 00:13:58.247 "bdev_name": "Malloc2p5" 00:13:58.247 }, 00:13:58.247 { 00:13:58.247 "nbd_device": "/dev/nbd9", 00:13:58.247 "bdev_name": "Malloc2p6" 00:13:58.247 }, 00:13:58.247 { 00:13:58.247 "nbd_device": "/dev/nbd10", 00:13:58.247 "bdev_name": "Malloc2p7" 00:13:58.247 }, 00:13:58.247 { 00:13:58.248 "nbd_device": "/dev/nbd11", 00:13:58.248 "bdev_name": "TestPT" 00:13:58.248 }, 00:13:58.248 { 00:13:58.248 "nbd_device": "/dev/nbd12", 00:13:58.248 "bdev_name": "raid0" 00:13:58.248 }, 00:13:58.248 { 00:13:58.248 "nbd_device": "/dev/nbd13", 00:13:58.248 "bdev_name": "concat0" 00:13:58.248 }, 00:13:58.248 { 00:13:58.248 "nbd_device": "/dev/nbd14", 00:13:58.248 "bdev_name": "raid1" 00:13:58.248 }, 00:13:58.248 { 00:13:58.248 "nbd_device": "/dev/nbd15", 00:13:58.248 "bdev_name": "AIO0" 00:13:58.248 } 00:13:58.248 ]' 00:13:58.248 12:58:02 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:58.248 12:58:02 -- bdev/nbd_common.sh@119 -- # echo '[ 
00:13:58.248 { 00:13:58.248 "nbd_device": "/dev/nbd0", 00:13:58.248 "bdev_name": "Malloc0" 00:13:58.248 }, 00:13:58.248 { 00:13:58.248 "nbd_device": "/dev/nbd1", 00:13:58.248 "bdev_name": "Malloc1p0" 00:13:58.248 }, 00:13:58.248 { 00:13:58.248 "nbd_device": "/dev/nbd2", 00:13:58.248 "bdev_name": "Malloc1p1" 00:13:58.248 }, 00:13:58.248 { 00:13:58.248 "nbd_device": "/dev/nbd3", 00:13:58.248 "bdev_name": "Malloc2p0" 00:13:58.248 }, 00:13:58.248 { 00:13:58.248 "nbd_device": "/dev/nbd4", 00:13:58.248 "bdev_name": "Malloc2p1" 00:13:58.248 }, 00:13:58.248 { 00:13:58.248 "nbd_device": "/dev/nbd5", 00:13:58.248 "bdev_name": "Malloc2p2" 00:13:58.248 }, 00:13:58.248 { 00:13:58.248 "nbd_device": "/dev/nbd6", 00:13:58.248 "bdev_name": "Malloc2p3" 00:13:58.248 }, 00:13:58.248 { 00:13:58.248 "nbd_device": "/dev/nbd7", 00:13:58.248 "bdev_name": "Malloc2p4" 00:13:58.248 }, 00:13:58.248 { 00:13:58.248 "nbd_device": "/dev/nbd8", 00:13:58.248 "bdev_name": "Malloc2p5" 00:13:58.248 }, 00:13:58.248 { 00:13:58.248 "nbd_device": "/dev/nbd9", 00:13:58.248 "bdev_name": "Malloc2p6" 00:13:58.248 }, 00:13:58.248 { 00:13:58.248 "nbd_device": "/dev/nbd10", 00:13:58.248 "bdev_name": "Malloc2p7" 00:13:58.248 }, 00:13:58.248 { 00:13:58.248 "nbd_device": "/dev/nbd11", 00:13:58.248 "bdev_name": "TestPT" 00:13:58.248 }, 00:13:58.248 { 00:13:58.248 "nbd_device": "/dev/nbd12", 00:13:58.248 "bdev_name": "raid0" 00:13:58.248 }, 00:13:58.248 { 00:13:58.248 "nbd_device": "/dev/nbd13", 00:13:58.248 "bdev_name": "concat0" 00:13:58.248 }, 00:13:58.248 { 00:13:58.248 "nbd_device": "/dev/nbd14", 00:13:58.248 "bdev_name": "raid1" 00:13:58.248 }, 00:13:58.248 { 00:13:58.248 "nbd_device": "/dev/nbd15", 00:13:58.248 "bdev_name": "AIO0" 00:13:58.248 } 00:13:58.248 ]' 00:13:58.248 12:58:02 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:58.248 12:58:02 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:13:58.248 12:58:02 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:58.248 12:58:02 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:13:58.248 12:58:02 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:58.248 12:58:02 -- bdev/nbd_common.sh@51 -- # local i 00:13:58.248 12:58:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:58.248 12:58:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:58.506 12:58:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:58.506 12:58:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:58.506 12:58:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:58.506 12:58:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:58.506 12:58:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:58.506 12:58:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:58.506 12:58:02 -- bdev/nbd_common.sh@41 -- # break 00:13:58.506 12:58:02 -- bdev/nbd_common.sh@45 -- # return 0 00:13:58.506 12:58:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:58.506 12:58:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:58.765 12:58:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:58.765 12:58:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:58.765 12:58:02 -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:58.765 12:58:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:58.765 12:58:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:58.765 12:58:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:58.765 12:58:02 -- bdev/nbd_common.sh@41 -- # break 00:13:58.765 12:58:02 -- bdev/nbd_common.sh@45 -- # return 0 00:13:58.765 12:58:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:58.765 12:58:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:59.024 12:58:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:59.024 12:58:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:59.024 12:58:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:59.024 12:58:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:59.024 12:58:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:59.024 12:58:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:13:59.024 12:58:03 -- bdev/nbd_common.sh@41 -- # break 00:13:59.024 12:58:03 -- bdev/nbd_common.sh@45 -- # return 0 00:13:59.024 12:58:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:59.024 12:58:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:59.282 12:58:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:59.283 12:58:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:59.283 12:58:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:59.283 12:58:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:59.283 12:58:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:59.283 12:58:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:59.283 12:58:03 -- bdev/nbd_common.sh@41 -- # break 00:13:59.283 12:58:03 -- bdev/nbd_common.sh@45 -- # return 0 00:13:59.283 12:58:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:59.283 12:58:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:59.541 12:58:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:59.541 12:58:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:59.541 12:58:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:59.541 12:58:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:59.541 12:58:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:59.541 12:58:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:59.541 12:58:03 -- bdev/nbd_common.sh@41 -- # break 00:13:59.541 12:58:03 -- bdev/nbd_common.sh@45 -- # return 0 00:13:59.541 12:58:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:59.541 12:58:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:59.800 12:58:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:59.800 12:58:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:59.800 12:58:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:59.800 12:58:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:59.800 12:58:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:59.800 12:58:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:59.800 12:58:03 -- bdev/nbd_common.sh@41 -- # break 00:13:59.800 12:58:03 -- bdev/nbd_common.sh@45 -- # return 0 00:13:59.800 12:58:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
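The nbd_stop_disk calls above and below are driven by the nbd_stop_disks loop from bdev/nbd_common.sh (trace markers @49-@54). A rough sketch of that loop, with $rootdir standing in for the /home/vagrant/spdk_repo/spdk prefix seen in the log — an assumption, not the verbatim source:

    nbd_stop_disks() {
        local rpc_server=$1
        local nbd_list=($2)    # space-separated /dev/nbdX paths, split as traced at @50
        local i
        for i in "${nbd_list[@]}"; do
            "$rootdir/scripts/rpc.py" -s "$rpc_server" nbd_stop_disk "$i"
            waitfornbd_exit "$(basename "$i")"
        done
    }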
00:13:59.800 12:58:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:14:00.059 12:58:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:14:00.059 12:58:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:14:00.059 12:58:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:14:00.059 12:58:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:00.059 12:58:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:00.059 12:58:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:14:00.059 12:58:04 -- bdev/nbd_common.sh@41 -- # break 00:14:00.059 12:58:04 -- bdev/nbd_common.sh@45 -- # return 0 00:14:00.059 12:58:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:00.059 12:58:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:14:00.327 12:58:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:14:00.327 12:58:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:14:00.327 12:58:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:14:00.328 12:58:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:00.328 12:58:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:00.328 12:58:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:14:00.328 12:58:04 -- bdev/nbd_common.sh@41 -- # break 00:14:00.328 12:58:04 -- bdev/nbd_common.sh@45 -- # return 0 00:14:00.328 12:58:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:00.328 12:58:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:14:00.587 12:58:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:14:00.587 12:58:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:14:00.587 12:58:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:14:00.587 12:58:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:00.587 12:58:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:00.587 12:58:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:14:00.587 12:58:04 -- bdev/nbd_common.sh@41 -- # break 00:14:00.587 12:58:04 -- bdev/nbd_common.sh@45 -- # return 0 00:14:00.587 12:58:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:00.587 12:58:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:14:00.845 12:58:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:14:00.845 12:58:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:14:00.845 12:58:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:14:00.845 12:58:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:00.845 12:58:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:00.845 12:58:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:14:00.845 12:58:04 -- bdev/nbd_common.sh@41 -- # break 00:14:00.845 12:58:04 -- bdev/nbd_common.sh@45 -- # return 0 00:14:00.845 12:58:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:00.845 12:58:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:14:01.104 12:58:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:14:01.104 12:58:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:14:01.104 12:58:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:14:01.104 12:58:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:01.104 12:58:05 -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:01.104 12:58:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:14:01.104 12:58:05 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:14:01.363 12:58:05 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:14:01.363 12:58:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:01.363 12:58:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:14:01.363 12:58:05 -- bdev/nbd_common.sh@41 -- # break 00:14:01.363 12:58:05 -- bdev/nbd_common.sh@45 -- # return 0 00:14:01.363 12:58:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:01.363 12:58:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:14:01.622 12:58:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:14:01.622 12:58:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:14:01.622 12:58:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:14:01.622 12:58:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:01.622 12:58:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:01.622 12:58:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:14:01.622 12:58:05 -- bdev/nbd_common.sh@41 -- # break 00:14:01.622 12:58:05 -- bdev/nbd_common.sh@45 -- # return 0 00:14:01.622 12:58:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:01.622 12:58:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:14:01.880 12:58:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:14:01.880 12:58:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:14:01.880 12:58:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:14:01.880 12:58:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:01.880 12:58:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:01.880 12:58:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:14:01.880 12:58:05 -- bdev/nbd_common.sh@41 -- # break 00:14:01.880 12:58:05 -- bdev/nbd_common.sh@45 -- # return 0 00:14:01.880 12:58:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:01.880 12:58:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:14:01.880 12:58:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:14:01.880 12:58:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:14:01.880 12:58:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:14:01.880 12:58:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:01.880 12:58:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:01.880 12:58:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:14:01.880 12:58:06 -- bdev/nbd_common.sh@41 -- # break 00:14:01.880 12:58:06 -- bdev/nbd_common.sh@45 -- # return 0 00:14:01.880 12:58:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:01.880 12:58:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:14:02.453 12:58:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:14:02.453 12:58:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:14:02.453 12:58:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:14:02.453 12:58:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:02.453 12:58:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:02.453 12:58:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:14:02.453 12:58:06 -- 
bdev/nbd_common.sh@41 -- # break 00:14:02.453 12:58:06 -- bdev/nbd_common.sh@45 -- # return 0 00:14:02.453 12:58:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:02.453 12:58:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:14:02.453 12:58:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:14:02.453 12:58:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:14:02.453 12:58:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:14:02.453 12:58:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:02.453 12:58:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:02.453 12:58:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:14:02.453 12:58:06 -- bdev/nbd_common.sh@41 -- # break 00:14:02.453 12:58:06 -- bdev/nbd_common.sh@45 -- # return 0 00:14:02.453 12:58:06 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:02.453 12:58:06 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:02.453 12:58:06 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:02.726 12:58:06 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:02.726 12:58:06 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:02.726 12:58:06 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:02.988 12:58:06 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:02.988 12:58:06 -- bdev/nbd_common.sh@65 -- # echo '' 00:14:02.988 12:58:06 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:02.988 12:58:06 -- bdev/nbd_common.sh@65 -- # true 00:14:02.988 12:58:06 -- bdev/nbd_common.sh@65 -- # count=0 00:14:02.988 12:58:06 -- bdev/nbd_common.sh@66 -- # echo 0 00:14:02.988 12:58:06 -- bdev/nbd_common.sh@122 -- # count=0 00:14:02.988 12:58:06 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:14:02.988 12:58:06 -- bdev/nbd_common.sh@127 -- # return 0 00:14:02.988 12:58:06 -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:14:02.988 12:58:06 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:02.988 12:58:06 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:14:02.988 12:58:06 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:02.988 12:58:06 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:14:02.988 12:58:06 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:02.989 12:58:06 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:14:02.989 12:58:06 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:02.989 12:58:06 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:14:02.989 12:58:06 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:02.989 12:58:06 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:14:02.989 12:58:06 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:02.989 12:58:06 -- bdev/nbd_common.sh@12 -- # local i 
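Before the harness re-attaches the bdevs in shuffled order (nbd_rpc_data_verify, just entered above), each stop was confirmed by waitfornbd_exit — the mirror image of waitfornbd, polling for the partition entry to vanish rather than appear. A sketch reconstructed from the trace; the 20-iteration cap matches the (( i <= 20 )) guard in the log, and the nbd10 teardown at 12:58:05 above shows the sleep-and-retry path being taken once:

    waitfornbd_exit() {
        local nbd_name=$1
        local i
        # Poll until the device drops out of /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                sleep 0.1
            else
                break
            fi
        done
        return 0
    }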
00:14:02.989 12:58:06 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:02.989 12:58:06 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:02.989 12:58:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:14:02.989 /dev/nbd0 00:14:03.247 12:58:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:03.247 12:58:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:03.247 12:58:07 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:14:03.247 12:58:07 -- common/autotest_common.sh@855 -- # local i 00:14:03.247 12:58:07 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:03.247 12:58:07 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:03.247 12:58:07 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:14:03.247 12:58:07 -- common/autotest_common.sh@859 -- # break 00:14:03.247 12:58:07 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:03.247 12:58:07 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:03.247 12:58:07 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:03.247 1+0 records in 00:14:03.247 1+0 records out 00:14:03.247 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000182539 s, 22.4 MB/s 00:14:03.247 12:58:07 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.247 12:58:07 -- common/autotest_common.sh@872 -- # size=4096 00:14:03.247 12:58:07 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.247 12:58:07 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:14:03.247 12:58:07 -- common/autotest_common.sh@875 -- # return 0 00:14:03.247 12:58:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:03.247 12:58:07 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:03.247 12:58:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:14:03.506 /dev/nbd1 00:14:03.506 12:58:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:03.506 12:58:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:03.506 12:58:07 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:14:03.506 12:58:07 -- common/autotest_common.sh@855 -- # local i 00:14:03.506 12:58:07 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:03.506 12:58:07 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:03.506 12:58:07 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:14:03.506 12:58:07 -- common/autotest_common.sh@859 -- # break 00:14:03.506 12:58:07 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:03.506 12:58:07 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:03.506 12:58:07 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:03.506 1+0 records in 00:14:03.506 1+0 records out 00:14:03.506 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00036047 s, 11.4 MB/s 00:14:03.506 12:58:07 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.506 12:58:07 -- common/autotest_common.sh@872 -- # size=4096 00:14:03.506 12:58:07 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.506 12:58:07 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:14:03.506 12:58:07 -- common/autotest_common.sh@875 -- # 
return 0 00:14:03.506 12:58:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:03.506 12:58:07 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:03.506 12:58:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:14:03.765 /dev/nbd10 00:14:03.765 12:58:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:14:03.765 12:58:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:14:03.765 12:58:07 -- common/autotest_common.sh@854 -- # local nbd_name=nbd10 00:14:03.765 12:58:07 -- common/autotest_common.sh@855 -- # local i 00:14:03.765 12:58:07 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:03.765 12:58:07 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:03.765 12:58:07 -- common/autotest_common.sh@858 -- # grep -q -w nbd10 /proc/partitions 00:14:03.765 12:58:07 -- common/autotest_common.sh@859 -- # break 00:14:03.765 12:58:07 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:03.765 12:58:07 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:03.765 12:58:07 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:03.765 1+0 records in 00:14:03.765 1+0 records out 00:14:03.765 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320753 s, 12.8 MB/s 00:14:03.765 12:58:07 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.765 12:58:07 -- common/autotest_common.sh@872 -- # size=4096 00:14:03.765 12:58:07 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.765 12:58:07 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:14:03.765 12:58:07 -- common/autotest_common.sh@875 -- # return 0 00:14:03.765 12:58:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:03.765 12:58:07 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:03.765 12:58:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:14:04.024 /dev/nbd11 00:14:04.024 12:58:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:14:04.024 12:58:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:14:04.024 12:58:07 -- common/autotest_common.sh@854 -- # local nbd_name=nbd11 00:14:04.024 12:58:07 -- common/autotest_common.sh@855 -- # local i 00:14:04.024 12:58:07 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:04.024 12:58:07 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:04.024 12:58:07 -- common/autotest_common.sh@858 -- # grep -q -w nbd11 /proc/partitions 00:14:04.024 12:58:07 -- common/autotest_common.sh@859 -- # break 00:14:04.024 12:58:07 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:04.024 12:58:07 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:04.024 12:58:07 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:04.024 1+0 records in 00:14:04.024 1+0 records out 00:14:04.024 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343322 s, 11.9 MB/s 00:14:04.024 12:58:07 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:04.024 12:58:07 -- common/autotest_common.sh@872 -- # size=4096 00:14:04.024 12:58:07 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:04.024 12:58:07 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:14:04.024 12:58:07 -- 
common/autotest_common.sh@875 -- # return 0 00:14:04.024 12:58:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:04.024 12:58:07 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:04.024 12:58:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:14:04.283 /dev/nbd12 00:14:04.283 12:58:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:14:04.283 12:58:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:14:04.283 12:58:08 -- common/autotest_common.sh@854 -- # local nbd_name=nbd12 00:14:04.283 12:58:08 -- common/autotest_common.sh@855 -- # local i 00:14:04.283 12:58:08 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:04.283 12:58:08 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:04.283 12:58:08 -- common/autotest_common.sh@858 -- # grep -q -w nbd12 /proc/partitions 00:14:04.283 12:58:08 -- common/autotest_common.sh@859 -- # break 00:14:04.283 12:58:08 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:04.283 12:58:08 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:04.283 12:58:08 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:04.283 1+0 records in 00:14:04.283 1+0 records out 00:14:04.283 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000358987 s, 11.4 MB/s 00:14:04.283 12:58:08 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:04.283 12:58:08 -- common/autotest_common.sh@872 -- # size=4096 00:14:04.283 12:58:08 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:04.283 12:58:08 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:14:04.283 12:58:08 -- common/autotest_common.sh@875 -- # return 0 00:14:04.283 12:58:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:04.283 12:58:08 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:04.283 12:58:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:14:04.541 /dev/nbd13 00:14:04.541 12:58:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:14:04.541 12:58:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:14:04.541 12:58:08 -- common/autotest_common.sh@854 -- # local nbd_name=nbd13 00:14:04.541 12:58:08 -- common/autotest_common.sh@855 -- # local i 00:14:04.541 12:58:08 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:04.541 12:58:08 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:04.541 12:58:08 -- common/autotest_common.sh@858 -- # grep -q -w nbd13 /proc/partitions 00:14:04.541 12:58:08 -- common/autotest_common.sh@859 -- # break 00:14:04.541 12:58:08 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:04.541 12:58:08 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:04.541 12:58:08 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:04.541 1+0 records in 00:14:04.541 1+0 records out 00:14:04.541 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000358444 s, 11.4 MB/s 00:14:04.541 12:58:08 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:04.541 12:58:08 -- common/autotest_common.sh@872 -- # size=4096 00:14:04.541 12:58:08 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:04.541 12:58:08 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 
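The attach half of nbd_rpc_data_verify pairs each bdev name with an nbd node via nbd_start_disks (trace markers @9-@17). A sketch under the same $rootdir assumption as above; the trace hard-codes (( i < 16 )), generalized here to the list length:

    nbd_start_disks() {
        local rpc_server=$1
        local bdev_list=($2)
        local nbd_list=($3)
        local i
        for ((i = 0; i < ${#nbd_list[@]}; i++)); do
            "$rootdir/scripts/rpc.py" -s "$rpc_server" nbd_start_disk \
                "${bdev_list[i]}" "${nbd_list[i]}"
            waitfornbd "$(basename "${nbd_list[i]}")"
        done
    }

Note the deliberately scrambled pairing in this pass (Malloc1p1 on /dev/nbd10, Malloc2p5 on /dev/nbd2, and so on), which exercises the RPC path independently of device naming order.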
00:14:04.541 12:58:08 -- common/autotest_common.sh@875 -- # return 0 00:14:04.541 12:58:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:04.541 12:58:08 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:04.541 12:58:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:14:04.801 /dev/nbd14 00:14:04.801 12:58:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:14:04.801 12:58:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:14:04.801 12:58:08 -- common/autotest_common.sh@854 -- # local nbd_name=nbd14 00:14:04.801 12:58:08 -- common/autotest_common.sh@855 -- # local i 00:14:04.801 12:58:08 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:04.801 12:58:08 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:04.801 12:58:08 -- common/autotest_common.sh@858 -- # grep -q -w nbd14 /proc/partitions 00:14:04.801 12:58:08 -- common/autotest_common.sh@859 -- # break 00:14:04.801 12:58:08 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:04.801 12:58:08 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:04.801 12:58:08 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:04.801 1+0 records in 00:14:04.801 1+0 records out 00:14:04.801 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334706 s, 12.2 MB/s 00:14:04.801 12:58:08 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:04.801 12:58:08 -- common/autotest_common.sh@872 -- # size=4096 00:14:04.801 12:58:08 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:04.801 12:58:08 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:14:04.801 12:58:08 -- common/autotest_common.sh@875 -- # return 0 00:14:04.801 12:58:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:04.801 12:58:08 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:04.801 12:58:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:14:05.060 /dev/nbd15 00:14:05.060 12:58:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:14:05.060 12:58:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:14:05.060 12:58:09 -- common/autotest_common.sh@854 -- # local nbd_name=nbd15 00:14:05.060 12:58:09 -- common/autotest_common.sh@855 -- # local i 00:14:05.060 12:58:09 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:05.060 12:58:09 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:05.060 12:58:09 -- common/autotest_common.sh@858 -- # grep -q -w nbd15 /proc/partitions 00:14:05.060 12:58:09 -- common/autotest_common.sh@859 -- # break 00:14:05.060 12:58:09 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:05.060 12:58:09 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:05.060 12:58:09 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:05.060 1+0 records in 00:14:05.060 1+0 records out 00:14:05.060 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412498 s, 9.9 MB/s 00:14:05.060 12:58:09 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.060 12:58:09 -- common/autotest_common.sh@872 -- # size=4096 00:14:05.060 12:58:09 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.060 12:58:09 -- common/autotest_common.sh@874 -- 
# '[' 4096 '!=' 0 ']' 00:14:05.060 12:58:09 -- common/autotest_common.sh@875 -- # return 0 00:14:05.060 12:58:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:05.060 12:58:09 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:05.060 12:58:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:14:05.318 /dev/nbd2 00:14:05.318 12:58:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:14:05.318 12:58:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:14:05.318 12:58:09 -- common/autotest_common.sh@854 -- # local nbd_name=nbd2 00:14:05.318 12:58:09 -- common/autotest_common.sh@855 -- # local i 00:14:05.318 12:58:09 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:05.318 12:58:09 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:05.318 12:58:09 -- common/autotest_common.sh@858 -- # grep -q -w nbd2 /proc/partitions 00:14:05.318 12:58:09 -- common/autotest_common.sh@859 -- # break 00:14:05.318 12:58:09 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:05.318 12:58:09 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:05.318 12:58:09 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:05.318 1+0 records in 00:14:05.318 1+0 records out 00:14:05.318 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0004289 s, 9.6 MB/s 00:14:05.318 12:58:09 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.318 12:58:09 -- common/autotest_common.sh@872 -- # size=4096 00:14:05.318 12:58:09 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.318 12:58:09 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:14:05.318 12:58:09 -- common/autotest_common.sh@875 -- # return 0 00:14:05.318 12:58:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:05.318 12:58:09 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:05.318 12:58:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:14:05.577 /dev/nbd3 00:14:05.577 12:58:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:14:05.577 12:58:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:14:05.577 12:58:09 -- common/autotest_common.sh@854 -- # local nbd_name=nbd3 00:14:05.577 12:58:09 -- common/autotest_common.sh@855 -- # local i 00:14:05.577 12:58:09 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:05.577 12:58:09 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:05.577 12:58:09 -- common/autotest_common.sh@858 -- # grep -q -w nbd3 /proc/partitions 00:14:05.577 12:58:09 -- common/autotest_common.sh@859 -- # break 00:14:05.577 12:58:09 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:05.577 12:58:09 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:05.577 12:58:09 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:05.577 1+0 records in 00:14:05.577 1+0 records out 00:14:05.577 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000610681 s, 6.7 MB/s 00:14:05.577 12:58:09 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.577 12:58:09 -- common/autotest_common.sh@872 -- # size=4096 00:14:05.577 12:58:09 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.577 12:58:09 -- 
common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:14:05.577 12:58:09 -- common/autotest_common.sh@875 -- # return 0 00:14:05.577 12:58:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:05.577 12:58:09 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:05.577 12:58:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:14:05.835 /dev/nbd4 00:14:05.835 12:58:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:14:06.092 12:58:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:14:06.092 12:58:09 -- common/autotest_common.sh@854 -- # local nbd_name=nbd4 00:14:06.092 12:58:09 -- common/autotest_common.sh@855 -- # local i 00:14:06.092 12:58:09 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:06.092 12:58:09 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:06.092 12:58:09 -- common/autotest_common.sh@858 -- # grep -q -w nbd4 /proc/partitions 00:14:06.092 12:58:09 -- common/autotest_common.sh@859 -- # break 00:14:06.092 12:58:09 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:06.092 12:58:09 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:06.092 12:58:09 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:06.092 1+0 records in 00:14:06.092 1+0 records out 00:14:06.092 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000596515 s, 6.9 MB/s 00:14:06.092 12:58:09 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.092 12:58:09 -- common/autotest_common.sh@872 -- # size=4096 00:14:06.092 12:58:09 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.092 12:58:09 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:14:06.092 12:58:09 -- common/autotest_common.sh@875 -- # return 0 00:14:06.092 12:58:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:06.092 12:58:09 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:06.092 12:58:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:14:06.092 /dev/nbd5 00:14:06.350 12:58:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:14:06.350 12:58:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:14:06.350 12:58:10 -- common/autotest_common.sh@854 -- # local nbd_name=nbd5 00:14:06.350 12:58:10 -- common/autotest_common.sh@855 -- # local i 00:14:06.350 12:58:10 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:06.350 12:58:10 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:06.350 12:58:10 -- common/autotest_common.sh@858 -- # grep -q -w nbd5 /proc/partitions 00:14:06.350 12:58:10 -- common/autotest_common.sh@859 -- # break 00:14:06.350 12:58:10 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:06.350 12:58:10 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:06.350 12:58:10 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:06.350 1+0 records in 00:14:06.350 1+0 records out 00:14:06.350 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000516091 s, 7.9 MB/s 00:14:06.350 12:58:10 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.350 12:58:10 -- common/autotest_common.sh@872 -- # size=4096 00:14:06.350 12:58:10 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.350 
12:58:10 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:14:06.350 12:58:10 -- common/autotest_common.sh@875 -- # return 0 00:14:06.350 12:58:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:06.350 12:58:10 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:06.350 12:58:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:14:06.609 /dev/nbd6 00:14:06.609 12:58:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:14:06.609 12:58:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:14:06.609 12:58:10 -- common/autotest_common.sh@854 -- # local nbd_name=nbd6 00:14:06.609 12:58:10 -- common/autotest_common.sh@855 -- # local i 00:14:06.609 12:58:10 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:06.609 12:58:10 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:06.609 12:58:10 -- common/autotest_common.sh@858 -- # grep -q -w nbd6 /proc/partitions 00:14:06.609 12:58:10 -- common/autotest_common.sh@859 -- # break 00:14:06.609 12:58:10 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:06.609 12:58:10 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:06.609 12:58:10 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:06.609 1+0 records in 00:14:06.609 1+0 records out 00:14:06.609 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000704909 s, 5.8 MB/s 00:14:06.609 12:58:10 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.609 12:58:10 -- common/autotest_common.sh@872 -- # size=4096 00:14:06.609 12:58:10 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.609 12:58:10 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:14:06.609 12:58:10 -- common/autotest_common.sh@875 -- # return 0 00:14:06.609 12:58:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:06.609 12:58:10 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:06.609 12:58:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:14:06.868 /dev/nbd7 00:14:06.868 12:58:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:14:06.868 12:58:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:14:06.868 12:58:10 -- common/autotest_common.sh@854 -- # local nbd_name=nbd7 00:14:06.868 12:58:10 -- common/autotest_common.sh@855 -- # local i 00:14:06.868 12:58:10 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:06.868 12:58:10 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:06.868 12:58:10 -- common/autotest_common.sh@858 -- # grep -q -w nbd7 /proc/partitions 00:14:06.868 12:58:10 -- common/autotest_common.sh@859 -- # break 00:14:06.868 12:58:10 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:06.868 12:58:10 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:06.868 12:58:10 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:06.868 1+0 records in 00:14:06.868 1+0 records out 00:14:06.868 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00132593 s, 3.1 MB/s 00:14:06.868 12:58:10 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.868 12:58:10 -- common/autotest_common.sh@872 -- # size=4096 00:14:06.868 12:58:10 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.868 
12:58:10 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:14:06.868 12:58:10 -- common/autotest_common.sh@875 -- # return 0 00:14:06.868 12:58:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:06.868 12:58:10 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:06.868 12:58:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:14:07.127 /dev/nbd8 00:14:07.127 12:58:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:14:07.127 12:58:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:14:07.127 12:58:11 -- common/autotest_common.sh@854 -- # local nbd_name=nbd8 00:14:07.127 12:58:11 -- common/autotest_common.sh@855 -- # local i 00:14:07.127 12:58:11 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:07.127 12:58:11 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:07.127 12:58:11 -- common/autotest_common.sh@858 -- # grep -q -w nbd8 /proc/partitions 00:14:07.127 12:58:11 -- common/autotest_common.sh@859 -- # break 00:14:07.127 12:58:11 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:07.127 12:58:11 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:07.127 12:58:11 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:07.127 1+0 records in 00:14:07.127 1+0 records out 00:14:07.127 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000790244 s, 5.2 MB/s 00:14:07.127 12:58:11 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.127 12:58:11 -- common/autotest_common.sh@872 -- # size=4096 00:14:07.128 12:58:11 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.128 12:58:11 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:14:07.128 12:58:11 -- common/autotest_common.sh@875 -- # return 0 00:14:07.128 12:58:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:07.128 12:58:11 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:07.128 12:58:11 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:14:07.387 /dev/nbd9 00:14:07.387 12:58:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:14:07.387 12:58:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:14:07.387 12:58:11 -- common/autotest_common.sh@854 -- # local nbd_name=nbd9 00:14:07.387 12:58:11 -- common/autotest_common.sh@855 -- # local i 00:14:07.387 12:58:11 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:07.387 12:58:11 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:07.387 12:58:11 -- common/autotest_common.sh@858 -- # grep -q -w nbd9 /proc/partitions 00:14:07.387 12:58:11 -- common/autotest_common.sh@859 -- # break 00:14:07.387 12:58:11 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:07.387 12:58:11 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:07.387 12:58:11 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:07.387 1+0 records in 00:14:07.387 1+0 records out 00:14:07.387 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00091824 s, 4.5 MB/s 00:14:07.387 12:58:11 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.387 12:58:11 -- common/autotest_common.sh@872 -- # size=4096 00:14:07.387 12:58:11 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.387 
12:58:11 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:14:07.387 12:58:11 -- common/autotest_common.sh@875 -- # return 0 00:14:07.387 12:58:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:07.387 12:58:11 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:07.387 12:58:11 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:07.387 12:58:11 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:07.387 12:58:11 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:07.646 12:58:11 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:07.646 { 00:14:07.646 "nbd_device": "/dev/nbd0", 00:14:07.646 "bdev_name": "Malloc0" 00:14:07.646 }, 00:14:07.646 { 00:14:07.646 "nbd_device": "/dev/nbd1", 00:14:07.646 "bdev_name": "Malloc1p0" 00:14:07.646 }, 00:14:07.646 { 00:14:07.646 "nbd_device": "/dev/nbd10", 00:14:07.646 "bdev_name": "Malloc1p1" 00:14:07.646 }, 00:14:07.646 { 00:14:07.646 "nbd_device": "/dev/nbd11", 00:14:07.646 "bdev_name": "Malloc2p0" 00:14:07.646 }, 00:14:07.646 { 00:14:07.646 "nbd_device": "/dev/nbd12", 00:14:07.646 "bdev_name": "Malloc2p1" 00:14:07.646 }, 00:14:07.646 { 00:14:07.646 "nbd_device": "/dev/nbd13", 00:14:07.646 "bdev_name": "Malloc2p2" 00:14:07.646 }, 00:14:07.646 { 00:14:07.646 "nbd_device": "/dev/nbd14", 00:14:07.646 "bdev_name": "Malloc2p3" 00:14:07.646 }, 00:14:07.646 { 00:14:07.646 "nbd_device": "/dev/nbd15", 00:14:07.646 "bdev_name": "Malloc2p4" 00:14:07.646 }, 00:14:07.646 { 00:14:07.646 "nbd_device": "/dev/nbd2", 00:14:07.646 "bdev_name": "Malloc2p5" 00:14:07.646 }, 00:14:07.646 { 00:14:07.646 "nbd_device": "/dev/nbd3", 00:14:07.646 "bdev_name": "Malloc2p6" 00:14:07.646 }, 00:14:07.646 { 00:14:07.646 "nbd_device": "/dev/nbd4", 00:14:07.646 "bdev_name": "Malloc2p7" 00:14:07.646 }, 00:14:07.646 { 00:14:07.646 "nbd_device": "/dev/nbd5", 00:14:07.646 "bdev_name": "TestPT" 00:14:07.646 }, 00:14:07.646 { 00:14:07.646 "nbd_device": "/dev/nbd6", 00:14:07.646 "bdev_name": "raid0" 00:14:07.646 }, 00:14:07.646 { 00:14:07.646 "nbd_device": "/dev/nbd7", 00:14:07.646 "bdev_name": "concat0" 00:14:07.646 }, 00:14:07.646 { 00:14:07.646 "nbd_device": "/dev/nbd8", 00:14:07.646 "bdev_name": "raid1" 00:14:07.646 }, 00:14:07.646 { 00:14:07.646 "nbd_device": "/dev/nbd9", 00:14:07.646 "bdev_name": "AIO0" 00:14:07.646 } 00:14:07.646 ]' 00:14:07.647 12:58:11 -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:07.647 { 00:14:07.647 "nbd_device": "/dev/nbd0", 00:14:07.647 "bdev_name": "Malloc0" 00:14:07.647 }, 00:14:07.647 { 00:14:07.647 "nbd_device": "/dev/nbd1", 00:14:07.647 "bdev_name": "Malloc1p0" 00:14:07.647 }, 00:14:07.647 { 00:14:07.647 "nbd_device": "/dev/nbd10", 00:14:07.647 "bdev_name": "Malloc1p1" 00:14:07.647 }, 00:14:07.647 { 00:14:07.647 "nbd_device": "/dev/nbd11", 00:14:07.647 "bdev_name": "Malloc2p0" 00:14:07.647 }, 00:14:07.647 { 00:14:07.647 "nbd_device": "/dev/nbd12", 00:14:07.647 "bdev_name": "Malloc2p1" 00:14:07.647 }, 00:14:07.647 { 00:14:07.647 "nbd_device": "/dev/nbd13", 00:14:07.647 "bdev_name": "Malloc2p2" 00:14:07.647 }, 00:14:07.647 { 00:14:07.647 "nbd_device": "/dev/nbd14", 00:14:07.647 "bdev_name": "Malloc2p3" 00:14:07.647 }, 00:14:07.647 { 00:14:07.647 "nbd_device": "/dev/nbd15", 00:14:07.647 "bdev_name": "Malloc2p4" 00:14:07.647 }, 00:14:07.647 { 00:14:07.647 "nbd_device": "/dev/nbd2", 00:14:07.647 "bdev_name": "Malloc2p5" 00:14:07.647 }, 00:14:07.647 { 00:14:07.647 "nbd_device": "/dev/nbd3", 00:14:07.647 "bdev_name": "Malloc2p6" 00:14:07.647 }, 
00:14:07.647 { 00:14:07.647 "nbd_device": "/dev/nbd4", 00:14:07.647 "bdev_name": "Malloc2p7" 00:14:07.647 }, 00:14:07.647 { 00:14:07.647 "nbd_device": "/dev/nbd5", 00:14:07.647 "bdev_name": "TestPT" 00:14:07.647 }, 00:14:07.647 { 00:14:07.647 "nbd_device": "/dev/nbd6", 00:14:07.647 "bdev_name": "raid0" 00:14:07.647 }, 00:14:07.647 { 00:14:07.647 "nbd_device": "/dev/nbd7", 00:14:07.647 "bdev_name": "concat0" 00:14:07.647 }, 00:14:07.647 { 00:14:07.647 "nbd_device": "/dev/nbd8", 00:14:07.647 "bdev_name": "raid1" 00:14:07.647 }, 00:14:07.647 { 00:14:07.647 "nbd_device": "/dev/nbd9", 00:14:07.647 "bdev_name": "AIO0" 00:14:07.647 } 00:14:07.647 ]' 00:14:07.647 12:58:11 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:07.647 12:58:11 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:07.647 /dev/nbd1 00:14:07.647 /dev/nbd10 00:14:07.647 /dev/nbd11 00:14:07.647 /dev/nbd12 00:14:07.647 /dev/nbd13 00:14:07.647 /dev/nbd14 00:14:07.647 /dev/nbd15 00:14:07.647 /dev/nbd2 00:14:07.647 /dev/nbd3 00:14:07.647 /dev/nbd4 00:14:07.647 /dev/nbd5 00:14:07.647 /dev/nbd6 00:14:07.647 /dev/nbd7 00:14:07.647 /dev/nbd8 00:14:07.647 /dev/nbd9' 00:14:07.647 12:58:11 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:07.647 /dev/nbd1 00:14:07.647 /dev/nbd10 00:14:07.647 /dev/nbd11 00:14:07.647 /dev/nbd12 00:14:07.647 /dev/nbd13 00:14:07.647 /dev/nbd14 00:14:07.647 /dev/nbd15 00:14:07.647 /dev/nbd2 00:14:07.647 /dev/nbd3 00:14:07.647 /dev/nbd4 00:14:07.647 /dev/nbd5 00:14:07.647 /dev/nbd6 00:14:07.647 /dev/nbd7 00:14:07.647 /dev/nbd8 00:14:07.647 /dev/nbd9' 00:14:07.647 12:58:11 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:07.647 12:58:11 -- bdev/nbd_common.sh@65 -- # count=16 00:14:07.647 12:58:11 -- bdev/nbd_common.sh@66 -- # echo 16 00:14:07.647 12:58:11 -- bdev/nbd_common.sh@95 -- # count=16 00:14:07.647 12:58:11 -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:14:07.647 12:58:11 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:14:07.647 12:58:11 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:14:07.647 12:58:11 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:07.647 12:58:11 -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:07.647 12:58:11 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:07.647 12:58:11 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:07.647 12:58:11 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:14:07.647 256+0 records in 00:14:07.647 256+0 records out 00:14:07.647 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00574934 s, 182 MB/s 00:14:07.647 12:58:11 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:07.647 12:58:11 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:07.905 256+0 records in 00:14:07.905 256+0 records out 00:14:07.905 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.161197 s, 6.5 MB/s 00:14:07.906 12:58:11 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:07.906 12:58:11 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:07.906 256+0 records in 00:14:07.906 256+0 records out 00:14:07.906 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.151455 s, 6.9 
MB/s 00:14:07.906 12:58:12 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:07.906 12:58:12 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:14:08.219 256+0 records in 00:14:08.219 256+0 records out 00:14:08.219 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.152915 s, 6.9 MB/s 00:14:08.219 12:58:12 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:08.219 12:58:12 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:14:08.219 256+0 records in 00:14:08.219 256+0 records out 00:14:08.219 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148761 s, 7.0 MB/s 00:14:08.219 12:58:12 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:08.219 12:58:12 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:14:08.477 256+0 records in 00:14:08.477 256+0 records out 00:14:08.477 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.150849 s, 7.0 MB/s 00:14:08.477 12:58:12 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:08.477 12:58:12 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:14:08.735 256+0 records in 00:14:08.735 256+0 records out 00:14:08.735 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148116 s, 7.1 MB/s 00:14:08.735 12:58:12 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:08.736 12:58:12 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:14:08.736 256+0 records in 00:14:08.736 256+0 records out 00:14:08.736 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148888 s, 7.0 MB/s 00:14:08.736 12:58:12 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:08.736 12:58:12 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:14:08.994 256+0 records in 00:14:08.994 256+0 records out 00:14:08.994 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148338 s, 7.1 MB/s 00:14:08.994 12:58:12 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:08.994 12:58:12 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:14:08.994 256+0 records in 00:14:08.994 256+0 records out 00:14:08.994 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.149508 s, 7.0 MB/s 00:14:08.994 12:58:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:08.994 12:58:13 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:14:09.252 256+0 records in 00:14:09.252 256+0 records out 00:14:09.252 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148425 s, 7.1 MB/s 00:14:09.252 12:58:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:09.252 12:58:13 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:14:09.511 256+0 records in 00:14:09.511 256+0 records out 00:14:09.511 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148448 s, 7.1 MB/s 00:14:09.511 12:58:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:09.511 12:58:13 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:14:09.511 256+0 
records in 00:14:09.511 256+0 records out 00:14:09.511 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148648 s, 7.1 MB/s 00:14:09.511 12:58:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:09.511 12:58:13 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:14:09.769 256+0 records in 00:14:09.769 256+0 records out 00:14:09.769 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.149857 s, 7.0 MB/s 00:14:09.769 12:58:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:09.769 12:58:13 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:14:09.769 256+0 records in 00:14:09.769 256+0 records out 00:14:09.769 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.150969 s, 6.9 MB/s 00:14:09.769 12:58:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:09.769 12:58:13 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:14:10.028 256+0 records in 00:14:10.028 256+0 records out 00:14:10.028 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153806 s, 6.8 MB/s 00:14:10.028 12:58:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:10.028 12:58:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:14:10.300 256+0 records in 00:14:10.300 256+0 records out 00:14:10.300 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.22486 s, 4.7 MB/s 00:14:10.300 12:58:14 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:14:10.300 12:58:14 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:14:10.300 12:58:14 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:10.300 12:58:14 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:10.300 12:58:14 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:10.300 12:58:14 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:10.300 12:58:14 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:10.300 12:58:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:10.300 12:58:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:14:10.300 12:58:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:10.300 12:58:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:14:10.300 12:58:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:10.301 12:58:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:14:10.301 12:58:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:10.301 12:58:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:14:10.301 12:58:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:10.301 12:58:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:14:10.301 12:58:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:10.301 12:58:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:14:10.301 12:58:14 
-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:10.301 12:58:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:14:10.301 12:58:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:10.301 12:58:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15 00:14:10.301 12:58:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:10.301 12:58:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2 00:14:10.301 12:58:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:10.301 12:58:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3 00:14:10.301 12:58:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:10.301 12:58:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4 00:14:10.301 12:58:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:10.301 12:58:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5 00:14:10.301 12:58:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:10.301 12:58:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6 00:14:10.301 12:58:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:10.301 12:58:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7 00:14:10.301 12:58:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:10.301 12:58:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8 00:14:10.301 12:58:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:10.301 12:58:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9 00:14:10.301 12:58:14 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:10.301 12:58:14 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:14:10.301 12:58:14 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:10.301 12:58:14 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:14:10.301 12:58:14 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:10.301 12:58:14 -- bdev/nbd_common.sh@51 -- # local i 00:14:10.301 12:58:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:10.301 12:58:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:10.868 12:58:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:10.868 12:58:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:10.868 12:58:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:10.868 12:58:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:10.868 12:58:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:10.868 12:58:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:10.868 12:58:14 -- bdev/nbd_common.sh@41 -- # break 00:14:10.868 12:58:14 -- bdev/nbd_common.sh@45 -- # return 0 00:14:10.868 12:58:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:10.868 12:58:14 -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:10.868 12:58:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:10.868 12:58:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:10.868 12:58:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:10.869 12:58:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:10.869 12:58:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:10.869 12:58:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:10.869 12:58:15 -- bdev/nbd_common.sh@41 -- # break 00:14:10.869 12:58:15 -- bdev/nbd_common.sh@45 -- # return 0 00:14:10.869 12:58:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:10.869 12:58:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:14:11.435 12:58:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:14:11.435 12:58:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:14:11.435 12:58:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:14:11.435 12:58:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:11.435 12:58:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:11.435 12:58:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:14:11.436 12:58:15 -- bdev/nbd_common.sh@41 -- # break 00:14:11.436 12:58:15 -- bdev/nbd_common.sh@45 -- # return 0 00:14:11.436 12:58:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:11.436 12:58:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:14:11.693 12:58:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:14:11.693 12:58:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:14:11.693 12:58:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:14:11.693 12:58:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:11.693 12:58:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:11.693 12:58:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:14:11.694 12:58:15 -- bdev/nbd_common.sh@41 -- # break 00:14:11.694 12:58:15 -- bdev/nbd_common.sh@45 -- # return 0 00:14:11.694 12:58:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:11.694 12:58:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:14:11.694 12:58:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:14:11.694 12:58:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:14:11.694 12:58:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:14:11.694 12:58:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:11.694 12:58:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:11.694 12:58:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:14:11.694 12:58:15 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:14:11.951 12:58:15 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:14:11.952 12:58:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:11.952 12:58:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:14:11.952 12:58:15 -- bdev/nbd_common.sh@41 -- # break 00:14:11.952 12:58:15 -- bdev/nbd_common.sh@45 -- # return 0 00:14:11.952 12:58:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:11.952 12:58:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:14:12.210 12:58:16 -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:14:12.210 12:58:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:14:12.210 12:58:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:14:12.210 12:58:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:12.210 12:58:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:12.210 12:58:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:14:12.210 12:58:16 -- bdev/nbd_common.sh@41 -- # break 00:14:12.210 12:58:16 -- bdev/nbd_common.sh@45 -- # return 0 00:14:12.210 12:58:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:12.210 12:58:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:14:12.469 12:58:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:14:12.469 12:58:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:14:12.469 12:58:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:14:12.469 12:58:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:12.469 12:58:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:12.469 12:58:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:14:12.469 12:58:16 -- bdev/nbd_common.sh@41 -- # break 00:14:12.469 12:58:16 -- bdev/nbd_common.sh@45 -- # return 0 00:14:12.469 12:58:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:12.469 12:58:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:14:12.727 12:58:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:14:12.727 12:58:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:14:12.727 12:58:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:14:12.727 12:58:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:12.727 12:58:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:12.727 12:58:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:14:12.727 12:58:16 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:14:12.727 12:58:16 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:14:12.727 12:58:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:12.727 12:58:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:14:12.727 12:58:16 -- bdev/nbd_common.sh@41 -- # break 00:14:12.727 12:58:16 -- bdev/nbd_common.sh@45 -- # return 0 00:14:12.727 12:58:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:12.727 12:58:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:14:12.985 12:58:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:14:12.985 12:58:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:14:12.985 12:58:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:14:12.985 12:58:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:12.985 12:58:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:12.985 12:58:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:14:13.243 12:58:17 -- bdev/nbd_common.sh@41 -- # break 00:14:13.243 12:58:17 -- bdev/nbd_common.sh@45 -- # return 0 00:14:13.243 12:58:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:13.243 12:58:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:14:13.243 12:58:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:14:13.243 12:58:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:14:13.243 12:58:17 -- bdev/nbd_common.sh@35 -- 
# local nbd_name=nbd3 00:14:13.243 12:58:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:13.243 12:58:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:13.243 12:58:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:14:13.243 12:58:17 -- bdev/nbd_common.sh@41 -- # break 00:14:13.243 12:58:17 -- bdev/nbd_common.sh@45 -- # return 0 00:14:13.243 12:58:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:13.243 12:58:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:14:13.809 12:58:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:14:13.809 12:58:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:14:13.809 12:58:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:14:13.809 12:58:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:13.809 12:58:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:13.809 12:58:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:14:13.809 12:58:17 -- bdev/nbd_common.sh@41 -- # break 00:14:13.809 12:58:17 -- bdev/nbd_common.sh@45 -- # return 0 00:14:13.809 12:58:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:13.809 12:58:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:14:13.809 12:58:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:14:13.809 12:58:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:14:13.809 12:58:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:14:13.809 12:58:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:13.809 12:58:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:13.809 12:58:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:14:13.809 12:58:17 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:14:14.067 12:58:17 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:14:14.067 12:58:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:14.067 12:58:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:14:14.067 12:58:17 -- bdev/nbd_common.sh@41 -- # break 00:14:14.067 12:58:17 -- bdev/nbd_common.sh@45 -- # return 0 00:14:14.067 12:58:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:14.067 12:58:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:14:14.325 12:58:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:14:14.325 12:58:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:14:14.325 12:58:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:14:14.325 12:58:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:14.325 12:58:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:14.325 12:58:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:14:14.325 12:58:18 -- bdev/nbd_common.sh@41 -- # break 00:14:14.325 12:58:18 -- bdev/nbd_common.sh@45 -- # return 0 00:14:14.325 12:58:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:14.325 12:58:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:14:14.583 12:58:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:14:14.583 12:58:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:14:14.583 12:58:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:14:14.583 12:58:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:14.583 12:58:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:14.583 12:58:18 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:14:14.583 12:58:18 -- bdev/nbd_common.sh@41 -- # break 00:14:14.583 12:58:18 -- bdev/nbd_common.sh@45 -- # return 0 00:14:14.583 12:58:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:14.583 12:58:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:14:14.859 12:58:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:14:14.859 12:58:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:14:14.859 12:58:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:14:14.859 12:58:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:14.859 12:58:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:14.859 12:58:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:14:14.859 12:58:18 -- bdev/nbd_common.sh@41 -- # break 00:14:14.859 12:58:18 -- bdev/nbd_common.sh@45 -- # return 0 00:14:14.859 12:58:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:14.859 12:58:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:14:15.148 12:58:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:14:15.148 12:58:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:14:15.148 12:58:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:14:15.148 12:58:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:15.148 12:58:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:15.148 12:58:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:14:15.148 12:58:19 -- bdev/nbd_common.sh@41 -- # break 00:14:15.148 12:58:19 -- bdev/nbd_common.sh@45 -- # return 0 00:14:15.148 12:58:19 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:15.148 12:58:19 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:15.148 12:58:19 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:15.413 12:58:19 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:15.413 12:58:19 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:15.413 12:58:19 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:15.413 12:58:19 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:15.413 12:58:19 -- bdev/nbd_common.sh@65 -- # echo '' 00:14:15.413 12:58:19 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:15.413 12:58:19 -- bdev/nbd_common.sh@65 -- # true 00:14:15.413 12:58:19 -- bdev/nbd_common.sh@65 -- # count=0 00:14:15.413 12:58:19 -- bdev/nbd_common.sh@66 -- # echo 0 00:14:15.413 12:58:19 -- bdev/nbd_common.sh@104 -- # count=0 00:14:15.413 12:58:19 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:15.413 12:58:19 -- bdev/nbd_common.sh@109 -- # return 0 00:14:15.413 12:58:19 -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:14:15.413 12:58:19 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:15.413 12:58:19 -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:14:15.413 12:58:19 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:14:15.413 12:58:19 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:14:15.413 12:58:19 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 
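[editor's note] With the raw-device pass complete and nbd_get_disks confirming zero remaining exports, the suite re-runs a smaller end-to-end check through a logical volume: the RPC above creates a 16 MiB malloc bdev with 512-byte blocks (its name echoes back just below), then an lvstore and a 4 MiB lvol are built on it, the lvol is exported over NBD, and mkfs.ext4 on the export is the pass/fail step. A hedged sketch of that sequence — every RPC name and argument matches the trace, the control flow is paraphrased:

    # Sketch of the lvol-verify sequence traced below (illustrative).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    "$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512  # 16 MiB backing bdev, 512 B blocks
    "$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
    "$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs                   # 4 MiB lvol inside the store
    "$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0 && mkfs_ret=0 || mkfs_ret=1                    # filesystem creation is the verify step
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
    [ "$mkfs_ret" -ne 0 ] && exit 1

The "Filesystem too small for a journal" notice in the mkfs output below is expected for a 4 MiB volume and does not fail the check; mkfs_ret stays 0.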
00:14:15.673 malloc_lvol_verify 00:14:15.673 12:58:19 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:14:15.932 4617a29a-d894-4d60-8ddd-f0926036b4fa 00:14:15.932 12:58:20 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:14:16.191 104b0789-f09a-42c0-8499-0f70fe4fbab8 00:14:16.191 12:58:20 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:14:16.450 /dev/nbd0 00:14:16.451 12:58:20 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:14:16.451 mke2fs 1.45.5 (07-Jan-2020) 00:14:16.451 00:14:16.451 Filesystem too small for a journal 00:14:16.451 Creating filesystem with 1024 4k blocks and 1024 inodes 00:14:16.451 00:14:16.451 Allocating group tables: 0/1 done 00:14:16.451 Writing inode tables: 0/1 done 00:14:16.451 Writing superblocks and filesystem accounting information: 0/1 done 00:14:16.451 00:14:16.451 12:58:20 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:14:16.451 12:58:20 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:16.451 12:58:20 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:16.451 12:58:20 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:14:16.451 12:58:20 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:16.451 12:58:20 -- bdev/nbd_common.sh@51 -- # local i 00:14:16.451 12:58:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:16.451 12:58:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:16.710 12:58:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:16.710 12:58:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:16.710 12:58:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:16.710 12:58:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:16.710 12:58:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:16.710 12:58:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:16.710 12:58:20 -- bdev/nbd_common.sh@41 -- # break 00:14:16.710 12:58:20 -- bdev/nbd_common.sh@45 -- # return 0 00:14:16.710 12:58:20 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:14:16.710 12:58:20 -- bdev/nbd_common.sh@147 -- # return 0 00:14:16.710 12:58:20 -- bdev/blockdev.sh@326 -- # killprocess 115434 00:14:16.710 12:58:20 -- common/autotest_common.sh@924 -- # '[' -z 115434 ']' 00:14:16.710 12:58:20 -- common/autotest_common.sh@928 -- # kill -0 115434 00:14:16.710 12:58:20 -- common/autotest_common.sh@929 -- # uname 00:14:16.710 12:58:20 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:14:16.710 12:58:20 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 115434 00:14:16.710 killing process with pid 115434 00:14:16.710 12:58:20 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:14:16.710 12:58:20 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:14:16.710 12:58:20 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 115434' 00:14:16.710 12:58:20 -- common/autotest_common.sh@943 -- # kill 115434 00:14:16.710 12:58:20 -- common/autotest_common.sh@948 -- # wait 115434 00:14:19.248 ************************************ 00:14:19.248 END TEST bdev_nbd 00:14:19.248 ************************************ 00:14:19.248 12:58:22 -- bdev/blockdev.sh@327 -- # trap - SIGINT 
SIGTERM EXIT 00:14:19.248 00:14:19.248 real 0m27.236s 00:14:19.248 user 0m37.758s 00:14:19.248 sys 0m8.956s 00:14:19.248 12:58:22 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:14:19.248 12:58:22 -- common/autotest_common.sh@10 -- # set +x 00:14:19.248 12:58:22 -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:14:19.248 12:58:22 -- bdev/blockdev.sh@764 -- # '[' bdev = nvme ']' 00:14:19.248 12:58:22 -- bdev/blockdev.sh@764 -- # '[' bdev = gpt ']' 00:14:19.248 12:58:22 -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:14:19.248 12:58:22 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:14:19.248 12:58:22 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:14:19.248 12:58:22 -- common/autotest_common.sh@10 -- # set +x 00:14:19.248 ************************************ 00:14:19.248 START TEST bdev_fio 00:14:19.248 ************************************ 00:14:19.248 12:58:23 -- common/autotest_common.sh@1099 -- # fio_test_suite '' 00:14:19.248 12:58:23 -- bdev/blockdev.sh@331 -- # local env_context 00:14:19.248 12:58:23 -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:14:19.248 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:14:19.248 12:58:23 -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:14:19.248 12:58:23 -- bdev/blockdev.sh@339 -- # echo '' 00:14:19.248 12:58:23 -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:14:19.248 12:58:23 -- bdev/blockdev.sh@339 -- # env_context= 00:14:19.248 12:58:23 -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:14:19.248 12:58:23 -- common/autotest_common.sh@1254 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:19.248 12:58:23 -- common/autotest_common.sh@1255 -- # local workload=verify 00:14:19.248 12:58:23 -- common/autotest_common.sh@1256 -- # local bdev_type=AIO 00:14:19.248 12:58:23 -- common/autotest_common.sh@1257 -- # local env_context= 00:14:19.248 12:58:23 -- common/autotest_common.sh@1258 -- # local fio_dir=/usr/src/fio 00:14:19.248 12:58:23 -- common/autotest_common.sh@1260 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:19.248 12:58:23 -- common/autotest_common.sh@1265 -- # '[' -z verify ']' 00:14:19.248 12:58:23 -- common/autotest_common.sh@1269 -- # '[' -n '' ']' 00:14:19.248 12:58:23 -- common/autotest_common.sh@1273 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:19.248 12:58:23 -- common/autotest_common.sh@1275 -- # cat 00:14:19.248 12:58:23 -- common/autotest_common.sh@1287 -- # '[' verify == verify ']' 00:14:19.248 12:58:23 -- common/autotest_common.sh@1288 -- # cat 00:14:19.248 12:58:23 -- common/autotest_common.sh@1297 -- # '[' AIO == AIO ']' 00:14:19.248 12:58:23 -- common/autotest_common.sh@1298 -- # /usr/src/fio/fio --version 00:14:19.248 12:58:23 -- common/autotest_common.sh@1298 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:14:19.248 12:58:23 -- common/autotest_common.sh@1299 -- # echo serialize_overlap=1 00:14:19.248 12:58:23 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:19.248 12:58:23 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc0]' 00:14:19.248 12:58:23 -- bdev/blockdev.sh@343 -- # echo filename=Malloc0 00:14:19.248 12:58:23 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:19.248 12:58:23 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p0]' 00:14:19.248 12:58:23 -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p0 00:14:19.248 12:58:23 -- 
bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:19.248 12:58:23 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p1]' 00:14:19.248 12:58:23 -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p1 00:14:19.248 12:58:23 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:19.248 12:58:23 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p0]' 00:14:19.248 12:58:23 -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p0 00:14:19.248 12:58:23 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:19.248 12:58:23 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p1]' 00:14:19.248 12:58:23 -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p1 00:14:19.248 12:58:23 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:19.248 12:58:23 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p2]' 00:14:19.248 12:58:23 -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p2 00:14:19.248 12:58:23 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:19.248 12:58:23 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p3]' 00:14:19.248 12:58:23 -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p3 00:14:19.248 12:58:23 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:19.248 12:58:23 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p4]' 00:14:19.248 12:58:23 -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p4 00:14:19.248 12:58:23 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:19.248 12:58:23 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p5]' 00:14:19.248 12:58:23 -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p5 00:14:19.248 12:58:23 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:19.248 12:58:23 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p6]' 00:14:19.248 12:58:23 -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p6 00:14:19.248 12:58:23 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:19.248 12:58:23 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p7]' 00:14:19.248 12:58:23 -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p7 00:14:19.248 12:58:23 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:19.248 12:58:23 -- bdev/blockdev.sh@342 -- # echo '[job_TestPT]' 00:14:19.248 12:58:23 -- bdev/blockdev.sh@343 -- # echo filename=TestPT 00:14:19.248 12:58:23 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:19.248 12:58:23 -- bdev/blockdev.sh@342 -- # echo '[job_raid0]' 00:14:19.248 12:58:23 -- bdev/blockdev.sh@343 -- # echo filename=raid0 00:14:19.248 12:58:23 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:19.248 12:58:23 -- bdev/blockdev.sh@342 -- # echo '[job_concat0]' 00:14:19.248 12:58:23 -- bdev/blockdev.sh@343 -- # echo filename=concat0 00:14:19.248 12:58:23 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:19.248 12:58:23 -- bdev/blockdev.sh@342 -- # echo '[job_raid1]' 00:14:19.248 12:58:23 -- bdev/blockdev.sh@343 -- # echo filename=raid1 00:14:19.248 12:58:23 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:19.248 12:58:23 -- bdev/blockdev.sh@342 -- # echo '[job_AIO0]' 00:14:19.248 12:58:23 -- bdev/blockdev.sh@343 -- # echo filename=AIO0 00:14:19.248 12:58:23 -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:14:19.248 12:58:23 -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:19.248 12:58:23 -- common/autotest_common.sh@1075 -- # '[' 11 -le 1 ']' 00:14:19.248 12:58:23 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:14:19.248 12:58:23 -- common/autotest_common.sh@10 -- # set +x 00:14:19.248 ************************************ 00:14:19.248 START TEST bdev_fio_rw_verify 00:14:19.248 ************************************ 00:14:19.249 12:58:23 -- common/autotest_common.sh@1099 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:19.249 12:58:23 -- common/autotest_common.sh@1330 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:19.249 12:58:23 -- common/autotest_common.sh@1311 -- # local fio_dir=/usr/src/fio 00:14:19.249 12:58:23 -- common/autotest_common.sh@1313 -- # sanitizers=(libasan libclang_rt.asan) 00:14:19.249 12:58:23 -- common/autotest_common.sh@1313 -- # local sanitizers 00:14:19.249 12:58:23 -- common/autotest_common.sh@1314 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:19.249 12:58:23 -- common/autotest_common.sh@1315 -- # shift 00:14:19.249 12:58:23 -- common/autotest_common.sh@1317 -- # local asan_lib= 00:14:19.249 12:58:23 -- common/autotest_common.sh@1318 -- # for sanitizer in "${sanitizers[@]}" 00:14:19.249 12:58:23 -- common/autotest_common.sh@1319 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:19.249 12:58:23 -- common/autotest_common.sh@1319 -- # awk '{print $3}' 00:14:19.249 12:58:23 -- common/autotest_common.sh@1319 -- # grep libasan 00:14:19.249 12:58:23 -- common/autotest_common.sh@1319 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:14:19.249 12:58:23 -- common/autotest_common.sh@1320 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:14:19.249 12:58:23 -- common/autotest_common.sh@1321 -- # break 00:14:19.249 12:58:23 -- common/autotest_common.sh@1326 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:19.249 12:58:23 -- common/autotest_common.sh@1326 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:19.249 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:19.249 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:19.249 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:19.249 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:19.249 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=spdk_bdev, iodepth=8 00:14:19.249 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:19.249 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:19.249 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:19.249 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:19.249 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:19.249 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:19.249 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:19.249 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:19.249 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:19.249 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:19.249 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:19.249 fio-3.35 00:14:19.249 Starting 16 threads 00:14:31.450 00:14:31.450 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=116693: Wed Apr 17 12:58:34 2024 00:14:31.450 read: IOPS=74.4k, BW=291MiB/s (305MB/s)(2908MiB/10004msec) 00:14:31.450 slat (usec): min=2, max=35997, avg=35.23, stdev=420.12 00:14:31.450 clat (usec): min=9, max=40233, avg=285.17, stdev=1254.16 00:14:31.450 lat (usec): min=23, max=40255, avg=320.39, stdev=1322.41 00:14:31.450 clat percentiles (usec): 00:14:31.450 | 50.000th=[ 165], 99.000th=[ 898], 99.900th=[16319], 99.990th=[28181], 00:14:31.450 | 99.999th=[40109] 00:14:31.450 write: IOPS=120k, BW=471MiB/s (494MB/s)(4669MiB/9921msec); 0 zone resets 00:14:31.450 slat (usec): min=5, max=39666, avg=66.86, stdev=619.19 00:14:31.450 clat (usec): min=7, max=40327, avg=385.58, stdev=1491.98 00:14:31.450 lat (usec): min=35, max=40343, avg=452.44, stdev=1615.18 00:14:31.450 clat percentiles (usec): 00:14:31.450 | 50.000th=[ 219], 99.000th=[ 5735], 99.900th=[18482], 99.990th=[28443], 00:14:31.450 | 99.999th=[39584] 00:14:31.450 bw ( KiB/s): min=284712, max=798848, per=98.92%, avg=476748.00, stdev=9057.77, samples=304 00:14:31.450 iops : min=71178, max=199712, avg=119186.95, stdev=2264.45, samples=304 00:14:31.450 lat (usec) : 10=0.01%, 20=0.01%, 50=0.74%, 100=13.28%, 250=55.54% 00:14:31.450 lat (usec) : 500=25.90%, 750=2.63%, 1000=0.51% 00:14:31.450 lat (msec) : 2=0.39%, 4=0.08%, 10=0.19%, 20=0.66%, 50=0.07% 00:14:31.450 cpu : usr=57.63%, sys=1.92%, ctx=220951, majf=0, minf=82898 00:14:31.450 IO depths : 1=11.7%, 2=24.3%, 4=51.1%, 8=12.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:31.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:31.450 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:31.450 issued rwts: total=744458,1195377,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:31.450 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:31.450 00:14:31.450 Run status group 0 (all jobs): 00:14:31.450 READ: bw=291MiB/s (305MB/s), 291MiB/s-291MiB/s (305MB/s-305MB/s), io=2908MiB 
(3049MB), run=10004-10004msec 00:14:31.450 WRITE: bw=471MiB/s (494MB/s), 471MiB/s-471MiB/s (494MB/s-494MB/s), io=4669MiB (4896MB), run=9921-9921msec 00:14:33.356 ----------------------------------------------------- 00:14:33.356 Suppressions used: 00:14:33.356 count bytes template 00:14:33.356 16 140 /usr/src/fio/parse.c 00:14:33.356 8815 846240 /usr/src/fio/iolog.c 00:14:33.356 2 596 libcrypto.so 00:14:33.356 ----------------------------------------------------- 00:14:33.356 00:14:33.356 ************************************ 00:14:33.356 END TEST bdev_fio_rw_verify 00:14:33.356 ************************************ 00:14:33.356 00:14:33.356 real 0m14.121s 00:14:33.356 user 1m37.865s 00:14:33.356 sys 0m4.092s 00:14:33.356 12:58:37 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:14:33.356 12:58:37 -- common/autotest_common.sh@10 -- # set +x 00:14:33.356 12:58:37 -- bdev/blockdev.sh@350 -- # rm -f 00:14:33.356 12:58:37 -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:33.356 12:58:37 -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:14:33.356 12:58:37 -- common/autotest_common.sh@1254 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:33.356 12:58:37 -- common/autotest_common.sh@1255 -- # local workload=trim 00:14:33.356 12:58:37 -- common/autotest_common.sh@1256 -- # local bdev_type= 00:14:33.356 12:58:37 -- common/autotest_common.sh@1257 -- # local env_context= 00:14:33.356 12:58:37 -- common/autotest_common.sh@1258 -- # local fio_dir=/usr/src/fio 00:14:33.356 12:58:37 -- common/autotest_common.sh@1260 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:33.356 12:58:37 -- common/autotest_common.sh@1265 -- # '[' -z trim ']' 00:14:33.356 12:58:37 -- common/autotest_common.sh@1269 -- # '[' -n '' ']' 00:14:33.356 12:58:37 -- common/autotest_common.sh@1273 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:33.356 12:58:37 -- common/autotest_common.sh@1275 -- # cat 00:14:33.356 12:58:37 -- common/autotest_common.sh@1287 -- # '[' trim == verify ']' 00:14:33.356 12:58:37 -- common/autotest_common.sh@1302 -- # '[' trim == trim ']' 00:14:33.356 12:58:37 -- common/autotest_common.sh@1303 -- # echo rw=trimwrite 00:14:33.356 12:58:37 -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:14:33.357 12:58:37 -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "172861c7-2cfb-4f81-843a-e0174a981298"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "172861c7-2cfb-4f81-843a-e0174a981298",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "f290bd1f-3f1f-5361-86f1-421ed8577b0d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": 
"f290bd1f-3f1f-5361-86f1-421ed8577b0d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "631def9a-23a7-5617-a4ad-6071227a31e6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "631def9a-23a7-5617-a4ad-6071227a31e6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "b3e8b928-22cc-543b-ae6d-8057e7876dd3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b3e8b928-22cc-543b-ae6d-8057e7876dd3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "013aaae4-377d-5bf2-be0d-a8df1b703934"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "013aaae4-377d-5bf2-be0d-a8df1b703934",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "7e5ec4b0-6114-5f6e-8fdc-2d6e7749044b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7e5ec4b0-6114-5f6e-8fdc-2d6e7749044b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' 
"split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "3d4fbc76-a2d0-57e5-99a4-5434610318a1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3d4fbc76-a2d0-57e5-99a4-5434610318a1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "7107ab96-53f0-58a0-97e2-a2c50680801b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7107ab96-53f0-58a0-97e2-a2c50680801b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "eb38b391-d210-5c22-bc4d-81c7917acdad"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "eb38b391-d210-5c22-bc4d-81c7917acdad",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "30a79626-f094-5da0-8bc1-15fa8534bfb4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "30a79626-f094-5da0-8bc1-15fa8534bfb4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "dc69e74f-dfec-5ec9-89a4-14b69076f90c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "dc69e74f-dfec-5ec9-89a4-14b69076f90c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "4de16a82-d902-5fab-af6d-573aa968cf4b"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "4de16a82-d902-5fab-af6d-573aa968cf4b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "f85dcb15-d609-416f-9980-df4c67b4a35c"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "f85dcb15-d609-416f-9980-df4c67b4a35c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "f85dcb15-d609-416f-9980-df4c67b4a35c",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "4f9c30e7-88f7-443a-b60c-ebe471bae7ce",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "92c44a26-3125-4263-81c6-f18ccc4e5db1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "7d9bd8ba-05ef-4022-af71-4f926ca49ead"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "7d9bd8ba-05ef-4022-af71-4f926ca49ead",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": 
false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "7d9bd8ba-05ef-4022-af71-4f926ca49ead",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "f3d5379f-4c53-4641-9929-3d2781c7bf68",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "dc204392-880e-4c3b-9839-0ad929a399b5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "fc5b5885-de6b-4a19-b661-e987b8f1b2f4"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "fc5b5885-de6b-4a19-b661-e987b8f1b2f4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "fc5b5885-de6b-4a19-b661-e987b8f1b2f4",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "af0eadb5-75f4-4cf9-be21-de2f06b74068",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "51e42b2b-902d-45d3-af14-027a236fbaaa",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "245fb0f9-6786-4615-932c-3e54fd1b217a"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "245fb0f9-6786-4615-932c-3e54fd1b217a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:14:33.357 12:58:37 -- bdev/blockdev.sh@355 -- # [[ -n Malloc0 00:14:33.357 Malloc1p0 00:14:33.357 Malloc1p1 00:14:33.357 Malloc2p0 
00:14:33.357 Malloc2p1 00:14:33.357 Malloc2p2 00:14:33.357 Malloc2p3 00:14:33.357 Malloc2p4 00:14:33.357 Malloc2p5 00:14:33.357 Malloc2p6 00:14:33.357 Malloc2p7 00:14:33.357 TestPT 00:14:33.357 raid0 00:14:33.357 concat0 ]] 00:14:33.357 12:58:37 -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:14:33.358 12:58:37 -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "172861c7-2cfb-4f81-843a-e0174a981298"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "172861c7-2cfb-4f81-843a-e0174a981298",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "f290bd1f-3f1f-5361-86f1-421ed8577b0d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "f290bd1f-3f1f-5361-86f1-421ed8577b0d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "631def9a-23a7-5617-a4ad-6071227a31e6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "631def9a-23a7-5617-a4ad-6071227a31e6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "b3e8b928-22cc-543b-ae6d-8057e7876dd3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b3e8b928-22cc-543b-ae6d-8057e7876dd3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' 
"name": "Malloc2p1",' ' "aliases": [' ' "013aaae4-377d-5bf2-be0d-a8df1b703934"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "013aaae4-377d-5bf2-be0d-a8df1b703934",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "7e5ec4b0-6114-5f6e-8fdc-2d6e7749044b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7e5ec4b0-6114-5f6e-8fdc-2d6e7749044b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "3d4fbc76-a2d0-57e5-99a4-5434610318a1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3d4fbc76-a2d0-57e5-99a4-5434610318a1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "7107ab96-53f0-58a0-97e2-a2c50680801b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7107ab96-53f0-58a0-97e2-a2c50680801b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "eb38b391-d210-5c22-bc4d-81c7917acdad"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "eb38b391-d210-5c22-bc4d-81c7917acdad",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' 
"flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "30a79626-f094-5da0-8bc1-15fa8534bfb4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "30a79626-f094-5da0-8bc1-15fa8534bfb4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "dc69e74f-dfec-5ec9-89a4-14b69076f90c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "dc69e74f-dfec-5ec9-89a4-14b69076f90c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "4de16a82-d902-5fab-af6d-573aa968cf4b"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "4de16a82-d902-5fab-af6d-573aa968cf4b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "f85dcb15-d609-416f-9980-df4c67b4a35c"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "f85dcb15-d609-416f-9980-df4c67b4a35c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' 
' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "f85dcb15-d609-416f-9980-df4c67b4a35c",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "4f9c30e7-88f7-443a-b60c-ebe471bae7ce",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "92c44a26-3125-4263-81c6-f18ccc4e5db1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "7d9bd8ba-05ef-4022-af71-4f926ca49ead"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "7d9bd8ba-05ef-4022-af71-4f926ca49ead",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "7d9bd8ba-05ef-4022-af71-4f926ca49ead",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "f3d5379f-4c53-4641-9929-3d2781c7bf68",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "dc204392-880e-4c3b-9839-0ad929a399b5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "fc5b5885-de6b-4a19-b661-e987b8f1b2f4"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "fc5b5885-de6b-4a19-b661-e987b8f1b2f4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "fc5b5885-de6b-4a19-b661-e987b8f1b2f4",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' 
"num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "af0eadb5-75f4-4cf9-be21-de2f06b74068",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "51e42b2b-902d-45d3-af14-027a236fbaaa",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "245fb0f9-6786-4615-932c-3e54fd1b217a"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "245fb0f9-6786-4615-932c-3e54fd1b217a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:14:33.358 12:58:37 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:33.358 12:58:37 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc0]' 00:14:33.358 12:58:37 -- bdev/blockdev.sh@358 -- # echo filename=Malloc0 00:14:33.358 12:58:37 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:33.358 12:58:37 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p0]' 00:14:33.358 12:58:37 -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p0 00:14:33.358 12:58:37 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:33.358 12:58:37 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p1]' 00:14:33.358 12:58:37 -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p1 00:14:33.358 12:58:37 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:33.358 12:58:37 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p0]' 00:14:33.358 12:58:37 -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p0 00:14:33.358 12:58:37 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:33.358 12:58:37 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p1]' 00:14:33.358 12:58:37 -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p1 00:14:33.358 12:58:37 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:33.358 12:58:37 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p2]' 00:14:33.358 12:58:37 -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p2 00:14:33.358 12:58:37 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:33.358 12:58:37 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p3]' 00:14:33.358 12:58:37 -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p3 00:14:33.358 12:58:37 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:33.358 
12:58:37 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p4]' 00:14:33.358 12:58:37 -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p4 00:14:33.358 12:58:37 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:33.358 12:58:37 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p5]' 00:14:33.358 12:58:37 -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p5 00:14:33.358 12:58:37 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:33.358 12:58:37 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p6]' 00:14:33.358 12:58:37 -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p6 00:14:33.358 12:58:37 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:33.358 12:58:37 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p7]' 00:14:33.358 12:58:37 -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p7 00:14:33.358 12:58:37 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:33.358 12:58:37 -- bdev/blockdev.sh@357 -- # echo '[job_TestPT]' 00:14:33.358 12:58:37 -- bdev/blockdev.sh@358 -- # echo filename=TestPT 00:14:33.358 12:58:37 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:33.358 12:58:37 -- bdev/blockdev.sh@357 -- # echo '[job_raid0]' 00:14:33.358 12:58:37 -- bdev/blockdev.sh@358 -- # echo filename=raid0 00:14:33.359 12:58:37 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:33.359 12:58:37 -- bdev/blockdev.sh@357 -- # echo '[job_concat0]' 00:14:33.359 12:58:37 -- bdev/blockdev.sh@358 -- # echo filename=concat0 00:14:33.359 12:58:37 -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:33.359 12:58:37 -- common/autotest_common.sh@1075 -- # '[' 11 -le 1 ']' 00:14:33.359 12:58:37 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:14:33.359 12:58:37 -- common/autotest_common.sh@10 -- # set +x 00:14:33.359 ************************************ 00:14:33.359 START TEST bdev_fio_trim 00:14:33.359 ************************************ 00:14:33.359 12:58:37 -- common/autotest_common.sh@1099 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:33.359 12:58:37 -- common/autotest_common.sh@1330 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:33.359 12:58:37 -- common/autotest_common.sh@1311 -- # local fio_dir=/usr/src/fio 00:14:33.359 12:58:37 -- common/autotest_common.sh@1313 -- # sanitizers=(libasan libclang_rt.asan) 00:14:33.359 12:58:37 -- 
common/autotest_common.sh@1313 -- # local sanitizers 00:14:33.359 12:58:37 -- common/autotest_common.sh@1314 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:33.359 12:58:37 -- common/autotest_common.sh@1315 -- # shift 00:14:33.359 12:58:37 -- common/autotest_common.sh@1317 -- # local asan_lib= 00:14:33.359 12:58:37 -- common/autotest_common.sh@1318 -- # for sanitizer in "${sanitizers[@]}" 00:14:33.359 12:58:37 -- common/autotest_common.sh@1319 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:33.359 12:58:37 -- common/autotest_common.sh@1319 -- # grep libasan 00:14:33.359 12:58:37 -- common/autotest_common.sh@1319 -- # awk '{print $3}' 00:14:33.359 12:58:37 -- common/autotest_common.sh@1319 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:14:33.359 12:58:37 -- common/autotest_common.sh@1320 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:14:33.359 12:58:37 -- common/autotest_common.sh@1321 -- # break 00:14:33.359 12:58:37 -- common/autotest_common.sh@1326 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:33.359 12:58:37 -- common/autotest_common.sh@1326 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:33.646 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:33.646 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:33.646 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:33.646 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:33.646 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:33.646 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:33.646 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:33.646 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:33.646 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:33.646 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:33.646 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:33.646 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:33.646 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:33.646 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:33.646 fio-3.35 00:14:33.646 Starting 14 threads 00:14:45.850 00:14:45.850 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=116930: Wed Apr 17 12:58:49 2024 00:14:45.850 write: IOPS=157k, BW=613MiB/s (643MB/s)(6130MiB/10002msec); 0 zone 
resets 00:14:45.850 slat (usec): min=2, max=42682, avg=31.64, stdev=373.02 00:14:45.850 clat (usec): min=25, max=28361, avg=231.09, stdev=1071.80 00:14:45.850 lat (usec): min=35, max=42909, avg=262.74, stdev=1134.57 00:14:45.850 clat percentiles (usec): 00:14:45.850 | 50.000th=[ 149], 99.000th=[ 494], 99.900th=[16319], 99.990th=[21627], 00:14:45.850 | 99.999th=[28181] 00:14:45.850 bw ( KiB/s): min=397184, max=932592, per=99.46%, avg=624229.74, stdev=11772.38, samples=266 00:14:45.850 iops : min=99296, max=233148, avg=156057.42, stdev=2943.09, samples=266 00:14:45.850 trim: IOPS=157k, BW=613MiB/s (643MB/s)(6130MiB/10002msec); 0 zone resets 00:14:45.850 slat (usec): min=4, max=28519, avg=21.46, stdev=299.59 00:14:45.850 clat (usec): min=4, max=42909, avg=241.00, stdev=1036.18 00:14:45.850 lat (usec): min=13, max=42924, avg=262.46, stdev=1078.63 00:14:45.850 clat percentiles (usec): 00:14:45.850 | 50.000th=[ 167], 99.000th=[ 375], 99.900th=[16188], 99.990th=[22938], 00:14:45.850 | 99.999th=[28181] 00:14:45.850 bw ( KiB/s): min=397184, max=932592, per=99.46%, avg=624229.74, stdev=11772.46, samples=266 00:14:45.850 iops : min=99296, max=233148, avg=156057.42, stdev=2943.11, samples=266 00:14:45.850 lat (usec) : 10=0.08%, 20=0.21%, 50=0.83%, 100=14.09%, 250=75.40% 00:14:45.850 lat (usec) : 500=8.65%, 750=0.24%, 1000=0.01% 00:14:45.850 lat (msec) : 2=0.01%, 4=0.01%, 10=0.04%, 20=0.41%, 50=0.03% 00:14:45.850 cpu : usr=68.75%, sys=0.58%, ctx=175216, majf=0, minf=726 00:14:45.850 IO depths : 1=12.4%, 2=24.7%, 4=50.0%, 8=12.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:45.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:45.850 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:45.850 issued rwts: total=0,1569285,1569290,0 short=0,0,0,0 dropped=0,0,0,0 00:14:45.850 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:45.850 00:14:45.850 Run status group 0 (all jobs): 00:14:45.850 WRITE: bw=613MiB/s (643MB/s), 613MiB/s-613MiB/s (643MB/s-643MB/s), io=6130MiB (6428MB), run=10002-10002msec 00:14:45.850 TRIM: bw=613MiB/s (643MB/s), 613MiB/s-613MiB/s (643MB/s-643MB/s), io=6130MiB (6428MB), run=10002-10002msec 00:14:47.225 ----------------------------------------------------- 00:14:47.225 Suppressions used: 00:14:47.225 count bytes template 00:14:47.225 14 129 /usr/src/fio/parse.c 00:14:47.225 2 596 libcrypto.so 00:14:47.225 ----------------------------------------------------- 00:14:47.225 00:14:47.225 ************************************ 00:14:47.225 END TEST bdev_fio_trim 00:14:47.225 ************************************ 00:14:47.225 00:14:47.225 real 0m13.848s 00:14:47.225 user 1m41.114s 00:14:47.225 sys 0m1.760s 00:14:47.225 12:58:51 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:14:47.225 12:58:51 -- common/autotest_common.sh@10 -- # set +x 00:14:47.225 12:58:51 -- bdev/blockdev.sh@368 -- # rm -f 00:14:47.225 12:58:51 -- bdev/blockdev.sh@369 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:47.225 12:58:51 -- bdev/blockdev.sh@370 -- # popd 00:14:47.225 /home/vagrant/spdk_repo/spdk 00:14:47.225 12:58:51 -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:14:47.225 00:14:47.225 real 0m28.336s 00:14:47.225 user 3m19.197s 00:14:47.225 sys 0m5.980s 00:14:47.225 12:58:51 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:14:47.225 12:58:51 -- common/autotest_common.sh@10 -- # set +x 00:14:47.225 ************************************ 00:14:47.225 END TEST bdev_fio 00:14:47.225 
************************************ 00:14:47.484 12:58:51 -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:47.484 12:58:51 -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:47.484 12:58:51 -- common/autotest_common.sh@1075 -- # '[' 16 -le 1 ']' 00:14:47.484 12:58:51 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:14:47.484 12:58:51 -- common/autotest_common.sh@10 -- # set +x 00:14:47.484 ************************************ 00:14:47.484 START TEST bdev_verify 00:14:47.484 ************************************ 00:14:47.484 12:58:51 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:47.484 [2024-04-17 12:58:51.505243] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:14:47.484 [2024-04-17 12:58:51.505690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117160 ] 00:14:47.743 [2024-04-17 12:58:51.686117] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:48.064 [2024-04-17 12:58:51.939947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:48.064 [2024-04-17 12:58:51.939954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.065 [2024-04-17 12:58:51.990689] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:14:48.324 [2024-04-17 12:58:52.320368] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:48.324 [2024-04-17 12:58:52.320702] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:48.324 [2024-04-17 12:58:52.328312] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:48.324 [2024-04-17 12:58:52.328508] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:48.324 [2024-04-17 12:58:52.336341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:48.324 [2024-04-17 12:58:52.336527] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:14:48.324 [2024-04-17 12:58:52.336677] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:14:48.583 [2024-04-17 12:58:52.528610] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:48.583 [2024-04-17 12:58:52.529031] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:48.583 [2024-04-17 12:58:52.529133] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:48.583 [2024-04-17 12:58:52.529400] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:48.583 [2024-04-17 12:58:52.532324] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:48.583 [2024-04-17 12:58:52.532523] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:14:48.842 [2024-04-17 12:58:52.848952] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:14:48.842 
Running I/O for 5 seconds... 00:14:54.110 00:14:54.110 Latency(us) 00:14:54.110 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.110 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:54.110 Verification LBA range: start 0x0 length 0x1000 00:14:54.110 Malloc0 : 5.15 1292.61 5.05 0.00 0.00 98885.34 625.57 216387.96 00:14:54.110 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:54.110 Verification LBA range: start 0x1000 length 0x1000 00:14:54.110 Malloc0 : 5.17 1287.10 5.03 0.00 0.00 99287.91 647.91 346983.33 00:14:54.110 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:54.110 Verification LBA range: start 0x0 length 0x800 00:14:54.110 Malloc1p0 : 5.20 665.05 2.60 0.00 0.00 191761.27 3083.17 212574.95 00:14:54.110 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:54.110 Verification LBA range: start 0x800 length 0x800 00:14:54.110 Malloc1p0 : 5.17 667.99 2.61 0.00 0.00 190890.01 3053.38 200182.69 00:14:54.110 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:54.110 Verification LBA range: start 0x0 length 0x800 00:14:54.110 Malloc1p1 : 5.20 664.78 2.60 0.00 0.00 191355.19 3083.17 207808.70 00:14:54.110 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:54.110 Verification LBA range: start 0x800 length 0x800 00:14:54.110 Malloc1p1 : 5.18 667.70 2.61 0.00 0.00 190491.72 3053.38 198276.19 00:14:54.110 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:54.110 Verification LBA range: start 0x0 length 0x200 00:14:54.110 Malloc2p0 : 5.20 664.51 2.60 0.00 0.00 190951.56 3217.22 207808.70 00:14:54.110 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:54.110 Verification LBA range: start 0x200 length 0x200 00:14:54.110 Malloc2p0 : 5.18 667.41 2.61 0.00 0.00 190096.47 3157.64 194463.19 00:14:54.110 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:54.110 Verification LBA range: start 0x0 length 0x200 00:14:54.110 Malloc2p1 : 5.20 664.25 2.59 0.00 0.00 190571.11 3217.22 203995.69 00:14:54.110 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:54.110 Verification LBA range: start 0x200 length 0x200 00:14:54.110 Malloc2p1 : 5.18 667.14 2.61 0.00 0.00 189723.07 3187.43 193509.93 00:14:54.110 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:54.110 Verification LBA range: start 0x0 length 0x200 00:14:54.110 Malloc2p2 : 5.20 663.98 2.59 0.00 0.00 190194.35 3217.22 200182.69 00:14:54.110 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:54.110 Verification LBA range: start 0x200 length 0x200 00:14:54.110 Malloc2p2 : 5.18 666.87 2.60 0.00 0.00 189342.91 3172.54 190650.18 00:14:54.110 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:54.110 Verification LBA range: start 0x0 length 0x200 00:14:54.110 Malloc2p3 : 5.21 663.72 2.59 0.00 0.00 189785.94 3232.12 194463.19 00:14:54.110 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:54.110 Verification LBA range: start 0x200 length 0x200 00:14:54.110 Malloc2p3 : 5.18 666.60 2.60 0.00 0.00 188935.45 3217.22 185883.93 00:14:54.110 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:54.110 Verification LBA range: start 0x0 length 0x200 00:14:54.110 Malloc2p4 : 5.21 663.46 2.59 
0.00 0.00 189366.87 3291.69 190650.18 00:14:54.110 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:54.110 Verification LBA range: start 0x200 length 0x200 00:14:54.110 Malloc2p4 : 5.19 666.32 2.60 0.00 0.00 188517.38 3321.48 180164.42 00:14:54.369 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:54.369 Verification LBA range: start 0x0 length 0x200 00:14:54.369 Malloc2p5 : 5.21 663.19 2.59 0.00 0.00 188949.62 3306.59 186837.18 00:14:54.369 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:54.369 Verification LBA range: start 0x200 length 0x200 00:14:54.369 Malloc2p5 : 5.19 666.04 2.60 0.00 0.00 188107.79 3247.01 175398.17 00:14:54.369 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:54.369 Verification LBA range: start 0x0 length 0x200 00:14:54.369 Malloc2p6 : 5.21 662.92 2.59 0.00 0.00 188529.94 3261.91 182070.92 00:14:54.369 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:54.369 Verification LBA range: start 0x200 length 0x200 00:14:54.369 Malloc2p6 : 5.19 665.76 2.60 0.00 0.00 187699.82 3217.22 172538.41 00:14:54.369 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:54.369 Verification LBA range: start 0x0 length 0x200 00:14:54.369 Malloc2p7 : 5.22 662.66 2.59 0.00 0.00 188113.99 3247.01 178257.92 00:14:54.369 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:54.369 Verification LBA range: start 0x200 length 0x200 00:14:54.369 Malloc2p7 : 5.19 665.48 2.60 0.00 0.00 187290.42 3232.12 166818.91 00:14:54.369 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:54.369 Verification LBA range: start 0x0 length 0x1000 00:14:54.369 TestPT : 5.24 659.77 2.58 0.00 0.00 188309.43 18111.77 180164.42 00:14:54.369 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:54.369 Verification LBA range: start 0x1000 length 0x1000 00:14:54.369 TestPT : 5.22 639.93 2.50 0.00 0.00 193353.57 18230.92 244032.23 00:14:54.369 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:54.369 Verification LBA range: start 0x0 length 0x2000 00:14:54.369 raid0 : 5.22 662.11 2.59 0.00 0.00 186991.59 3321.48 156333.15 00:14:54.369 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:54.369 Verification LBA range: start 0x2000 length 0x2000 00:14:54.369 raid0 : 5.23 684.85 2.68 0.00 0.00 180832.50 3366.17 143940.89 00:14:54.369 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:54.369 Verification LBA range: start 0x0 length 0x2000 00:14:54.369 concat0 : 5.22 661.71 2.58 0.00 0.00 186616.26 3351.27 151566.89 00:14:54.369 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:54.369 Verification LBA range: start 0x2000 length 0x2000 00:14:54.369 concat0 : 5.24 684.59 2.67 0.00 0.00 180426.99 3366.17 138221.38 00:14:54.369 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:54.369 Verification LBA range: start 0x0 length 0x1000 00:14:54.369 raid1 : 5.25 683.27 2.67 0.00 0.00 180267.47 3038.49 144894.14 00:14:54.369 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:54.369 Verification LBA range: start 0x1000 length 0x1000 00:14:54.369 raid1 : 5.24 684.32 2.67 0.00 0.00 179993.05 3872.58 138221.38 00:14:54.369 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:54.369 
Verification LBA range: start 0x0 length 0x4e2 00:14:54.369 AIO0 : 5.25 682.69 2.67 0.00 0.00 179928.15 2129.92 143940.89 00:14:54.369 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:54.369 Verification LBA range: start 0x4e2 length 0x4e2 00:14:54.369 AIO0 : 5.24 683.93 2.67 0.00 0.00 179591.04 2159.71 144894.14 00:14:54.369 =================================================================================================================== 00:14:54.369 Total : 22612.72 88.33 0.00 0.00 177500.24 625.57 346983.33 00:14:56.322 00:14:56.322 real 0m8.978s 00:14:56.322 user 0m16.250s 00:14:56.322 sys 0m0.533s 00:14:56.322 12:59:00 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:14:56.322 12:59:00 -- common/autotest_common.sh@10 -- # set +x 00:14:56.322 ************************************ 00:14:56.322 END TEST bdev_verify 00:14:56.322 ************************************ 00:14:56.322 12:59:00 -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:56.322 12:59:00 -- common/autotest_common.sh@1075 -- # '[' 16 -le 1 ']' 00:14:56.322 12:59:00 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:14:56.322 12:59:00 -- common/autotest_common.sh@10 -- # set +x 00:14:56.580 ************************************ 00:14:56.580 START TEST bdev_verify_big_io 00:14:56.580 ************************************ 00:14:56.580 12:59:00 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:56.580 [2024-04-17 12:59:00.533671] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:14:56.580 [2024-04-17 12:59:00.534138] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117315 ] 00:14:56.580 [2024-04-17 12:59:00.704744] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:56.839 [2024-04-17 12:59:00.916753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.839 [2024-04-17 12:59:00.916758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.839 [2024-04-17 12:59:00.968545] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:14:57.407 [2024-04-17 12:59:01.317320] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:57.407 [2024-04-17 12:59:01.317664] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:57.407 [2024-04-17 12:59:01.325297] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:57.407 [2024-04-17 12:59:01.325585] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:57.407 [2024-04-17 12:59:01.333320] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:57.407 [2024-04-17 12:59:01.333564] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:14:57.407 [2024-04-17 12:59:01.333757] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:14:57.407 [2024-04-17 12:59:01.524034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:57.407 [2024-04-17 12:59:01.524398] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.407 [2024-04-17 12:59:01.524547] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:57.407 [2024-04-17 12:59:01.524680] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.407 [2024-04-17 12:59:01.527619] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.407 [2024-04-17 12:59:01.527875] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:14:57.975 [2024-04-17 12:59:01.840849] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:14:57.975 [2024-04-17 12:59:01.886056] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:14:57.975 [2024-04-17 12:59:01.889732] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:14:57.975 [2024-04-17 12:59:01.893813] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:14:57.975 [2024-04-17 12:59:01.897724] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:14:57.975 [2024-04-17 12:59:01.901135] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:14:57.975 [2024-04-17 12:59:01.904976] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:14:57.975 [2024-04-17 12:59:01.908355] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:14:57.975 [2024-04-17 12:59:01.912253] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:14:57.975 [2024-04-17 12:59:01.915629] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:14:57.975 [2024-04-17 12:59:01.919510] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:14:57.975 [2024-04-17 12:59:01.922859] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:14:57.975 [2024-04-17 12:59:01.926818] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:14:57.975 [2024-04-17 12:59:01.930207] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:14:57.975 [2024-04-17 12:59:01.934115] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:14:57.975 [2024-04-17 12:59:01.938081] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:14:57.975 [2024-04-17 12:59:01.941434] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:14:57.975 [2024-04-17 12:59:02.022816] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:14:57.975 [2024-04-17 12:59:02.029562] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:14:57.975 Running I/O for 5 seconds... 00:15:04.538 00:15:04.538 Latency(us) 00:15:04.538 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:04.538 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:04.538 Verification LBA range: start 0x0 length 0x100 00:15:04.538 Malloc0 : 5.80 220.56 13.79 0.00 0.00 571636.74 781.96 1685347.61 00:15:04.538 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:04.538 Verification LBA range: start 0x100 length 0x100 00:15:04.538 Malloc0 : 5.67 225.91 14.12 0.00 0.00 557735.43 767.07 1906501.82 00:15:04.538 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:04.538 Verification LBA range: start 0x0 length 0x80 00:15:04.538 Malloc1p0 : 5.96 122.05 7.63 0.00 0.00 990564.72 3098.07 1998013.91 00:15:04.538 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:04.538 Verification LBA range: start 0x80 length 0x80 00:15:04.538 Malloc1p0 : 6.39 47.55 2.97 0.00 0.00 2461411.99 1824.58 3858759.68 00:15:04.538 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:04.538 Verification LBA range: start 0x0 length 0x80 00:15:04.538 Malloc1p1 : 6.27 45.94 2.87 0.00 0.00 2521155.58 1519.24 4240060.04 00:15:04.538 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:04.538 Verification LBA range: start 0x80 length 0x80 00:15:04.538 Malloc1p1 : 6.39 47.54 2.97 0.00 0.00 2398733.28 1482.01 3721491.55 00:15:04.538 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:15:04.538 Verification LBA range: start 0x0 length 0x20 00:15:04.538 Malloc2p0 : 5.91 32.51 2.03 0.00 0.00 898082.74 726.11 1448941.38 00:15:04.538 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:15:04.538 Verification LBA range: start 0x20 length 0x20 00:15:04.538 Malloc2p0 : 5.93 37.77 2.36 0.00 0.00 775163.07 700.04 1227787.17 00:15:04.538 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:15:04.538 Verification LBA range: start 0x0 length 0x20 00:15:04.538 Malloc2p1 : 5.91 32.50 2.03 0.00 0.00 891980.27 741.00 1433689.37 00:15:04.538 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:15:04.538 Verification LBA range: start 0x20 length 0x20 00:15:04.538 Malloc2p1 : 5.93 37.77 2.36 0.00 0.00 769954.02 737.28 1212535.16 00:15:04.538 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:15:04.538 Verification LBA range: start 0x0 length 0x20 00:15:04.538 Malloc2p2 : 5.91 32.49 2.03 0.00 0.00 886293.69 767.07 1418437.35 
00:14:57.975 Running I/O for 5 seconds...
00:15:04.538
00:15:04.538 Latency(us)
00:15:04.538 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:04.538 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:15:04.538 Verification LBA range: start 0x0 length 0x100
00:15:04.538 Malloc0 : 5.80 220.56 13.79 0.00 0.00 571636.74 781.96 1685347.61
00:15:04.538 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:15:04.538 Verification LBA range: start 0x100 length 0x100
00:15:04.538 Malloc0 : 5.67 225.91 14.12 0.00 0.00 557735.43 767.07 1906501.82
00:15:04.538 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:15:04.538 Verification LBA range: start 0x0 length 0x80
00:15:04.538 Malloc1p0 : 5.96 122.05 7.63 0.00 0.00 990564.72 3098.07 1998013.91
00:15:04.538 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:15:04.538 Verification LBA range: start 0x80 length 0x80
00:15:04.538 Malloc1p0 : 6.39 47.55 2.97 0.00 0.00 2461411.99 1824.58 3858759.68
00:15:04.538 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:15:04.538 Verification LBA range: start 0x0 length 0x80
00:15:04.538 Malloc1p1 : 6.27 45.94 2.87 0.00 0.00 2521155.58 1519.24 4240060.04
00:15:04.538 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:15:04.538 Verification LBA range: start 0x80 length 0x80
00:15:04.538 Malloc1p1 : 6.39 47.54 2.97 0.00 0.00 2398733.28 1482.01 3721491.55
00:15:04.538 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:15:04.538 Verification LBA range: start 0x0 length 0x20
00:15:04.538 Malloc2p0 : 5.91 32.51 2.03 0.00 0.00 898082.74 726.11 1448941.38
00:15:04.538 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:15:04.538 Verification LBA range: start 0x20 length 0x20
00:15:04.538 Malloc2p0 : 5.93 37.77 2.36 0.00 0.00 775163.07 700.04 1227787.17
00:15:04.538 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:15:04.538 Verification LBA range: start 0x0 length 0x20
00:15:04.538 Malloc2p1 : 5.91 32.50 2.03 0.00 0.00 891980.27 741.00 1433689.37
00:15:04.538 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:15:04.538 Verification LBA range: start 0x20 length 0x20
00:15:04.538 Malloc2p1 : 5.93 37.77 2.36 0.00 0.00 769954.02 737.28 1212535.16
00:15:04.538 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:15:04.538 Verification LBA range: start 0x0 length 0x20
00:15:04.538 Malloc2p2 : 5.91 32.49 2.03 0.00 0.00 886293.69 767.07 1418437.35
00:15:04.538 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:15:04.538 Verification LBA range: start 0x20 length 0x20
00:15:04.538 Malloc2p2 : 5.93 37.76 2.36 0.00 0.00 764685.84 718.66 1189657.13
00:15:04.538 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:15:04.538 Verification LBA range: start 0x0 length 0x20
00:15:04.538 Malloc2p3 : 5.91 32.49 2.03 0.00 0.00 880389.61 763.35 1395559.33
00:15:04.538 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:15:04.538 Verification LBA range: start 0x20 length 0x20
00:15:04.538 Malloc2p3 : 5.93 37.75 2.36 0.00 0.00 759428.61 748.45 1174405.12
00:15:04.538 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:15:04.538 Verification LBA range: start 0x0 length 0x20
00:15:04.538 Malloc2p4 : 5.91 32.48 2.03 0.00 0.00 874402.57 767.07 1380307.32
00:15:04.538 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:15:04.538 Verification LBA range: start 0x20 length 0x20
00:15:04.538 Malloc2p4 : 5.94 37.74 2.36 0.00 0.00 754379.03 714.94 1159153.11
00:15:04.538 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:15:04.538 Verification LBA range: start 0x0 length 0x20
00:15:04.538 Malloc2p5 : 5.91 32.47 2.03 0.00 0.00 868812.79 726.11 1365055.30
00:15:04.538 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:15:04.538 Verification LBA range: start 0x20 length 0x20
00:15:04.538 Malloc2p5 : 5.94 37.73 2.36 0.00 0.00 749354.96 714.94 1136275.08
00:15:04.538 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:15:04.538 Verification LBA range: start 0x0 length 0x20
00:15:04.538 Malloc2p6 : 5.91 32.46 2.03 0.00 0.00 862927.10 741.00 1342177.28
00:15:04.538 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:15:04.538 Verification LBA range: start 0x20 length 0x20
00:15:04.538 Malloc2p6 : 5.94 37.72 2.36 0.00 0.00 744286.31 700.04 1121023.07
00:15:04.538 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:15:04.538 Verification LBA range: start 0x0 length 0x20
00:15:04.538 Malloc2p7 : 5.97 34.86 2.18 0.00 0.00 802407.50 729.83 1326925.27
00:15:04.538 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:15:04.538 Verification LBA range: start 0x20 length 0x20
00:15:04.538 Malloc2p7 : 5.94 37.71 2.36 0.00 0.00 739154.33 714.94 1105771.05
00:15:04.538 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:15:04.538 Verification LBA range: start 0x0 length 0x100
00:15:04.538 TestPT : 6.32 45.90 2.87 0.00 0.00 2339463.45 66727.56 3660483.49
00:15:04.538 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:15:04.538 Verification LBA range: start 0x100 length 0x100
00:15:04.538 TestPT : 6.40 44.97 2.81 0.00 0.00 2377230.69 70540.57 3324939.17
00:15:04.538 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:15:04.538 Verification LBA range: start 0x0 length 0x200
00:15:04.538 raid0 : 6.35 50.37 3.15 0.00 0.00 2083820.14 1571.37 3813003.64
00:15:04.538 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:15:04.538 Verification LBA range: start 0x200 length 0x200
00:15:04.538 raid0 : 6.41 54.92 3.43 0.00 0.00 1936004.28 1616.06 3309687.16
00:15:04.538 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:15:04.538 Verification LBA range: start 0x0 length 0x200
00:15:04.539 concat0 : 6.27 61.21 3.83 0.00 0.00 1698576.22 1571.37 3690987.52
00:15:04.539 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:15:04.539 Verification LBA range: start 0x200 length 0x200
00:15:04.539 concat0 : 6.35 70.87 4.43 0.00 0.00 1467499.41 1593.72 3172419.03
00:15:04.539 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:15:04.539 Verification LBA range: start 0x0 length 0x100
00:15:04.539 raid1 : 6.32 65.80 4.11 0.00 0.00 1553641.87 1980.97 3568971.40
00:15:04.539 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:15:04.539 Verification LBA range: start 0x100 length 0x100
00:15:04.539 raid1 : 6.40 77.48 4.84 0.00 0.00 1318049.35 1951.19 3050402.91
00:15:04.539 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536)
00:15:04.539 Verification LBA range: start 0x0 length 0x4e
00:15:04.539 AIO0 : 6.40 78.48 4.91 0.00 0.00 780232.10 1102.20 2104778.01
00:15:04.539 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536)
00:15:04.539 Verification LBA range: start 0x4e length 0x4e
00:15:04.539 AIO0 : 6.41 68.20 4.26 0.00 0.00 895605.62 1653.29 1738729.66
00:15:04.539 ===================================================================================================================
00:15:04.539 Total : 1891.97 118.25 0.00 0.00 1150156.97 700.04 4240060.04
00:15:07.086 ************************************
00:15:07.086 END TEST bdev_verify_big_io
00:15:07.086 ************************************
00:15:07.087
00:15:07.087 real 0m10.477s
00:15:07.087 user 0m19.361s
00:15:07.087 sys 0m0.504s
00:15:07.087 12:59:10 -- common/autotest_common.sh@1100 -- # xtrace_disable
00:15:07.087 12:59:10 -- common/autotest_common.sh@10 -- # set +x
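A quick way to sanity-check the table above is that the MiB/s column is just IOPS times the 65536-byte IO size. Taking the first Malloc0 row as a worked example (an editor-side check, not output of the test):

  # 220.56 IOPS x 65536 B per IO, converted to MiB/s
  echo 'scale=2; 220.56 * 65536 / 1048576' | bc   # 13.78, matching the reported 13.79 up to rounding

The same arithmetic holds for the Total row: 1891.97 * 65536 / 1048576 = 118.25 MiB/s.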
00:15:07.087 12:59:10 -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:15:07.087 12:59:10 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']'
00:15:07.087 12:59:10 -- common/autotest_common.sh@1081 -- # xtrace_disable
00:15:07.087 12:59:10 -- common/autotest_common.sh@10 -- # set +x
00:15:07.087 ************************************
00:15:07.087 START TEST bdev_write_zeroes
00:15:07.087 ************************************
00:15:07.087 12:59:11 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:15:07.087 [2024-04-17 12:59:11.077151] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization...
00:15:07.087 [2024-04-17 12:59:11.077578] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117478 ]
00:15:07.345 [2024-04-17 12:59:11.240181] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:07.345 [2024-04-17 12:59:11.457251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:15:07.604 [2024-04-17 12:59:11.507447] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null)
00:15:07.862 [2024-04-17 12:59:11.838399] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:15:07.862 [2024-04-17 12:59:11.838672] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:15:07.862 [2024-04-17 12:59:11.846372] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:15:07.862 [2024-04-17 12:59:11.846560] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:15:07.862 [2024-04-17 12:59:11.854394] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:15:07.862 [2024-04-17 12:59:11.854590] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:15:07.862 [2024-04-17 12:59:11.854738] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:15:08.124 [2024-04-17 12:59:12.051511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:15:08.124 [2024-04-17 12:59:12.051881] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:08.124 [2024-04-17 12:59:12.052043] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:15:08.124 [2024-04-17 12:59:12.052160] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:08.124 [2024-04-17 12:59:12.054771] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:08.124 [2024-04-17 12:59:12.054960] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:15:08.383 [2024-04-17 12:59:12.380721] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null)
00:15:08.383 Running I/O for 1 seconds...
00:15:09.760
00:15:09.760 Latency(us)
00:15:09.760 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:09.760 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:15:09.760 Malloc0 : 1.05 4765.96 18.62 0.00 0.00 26848.56 711.21 44564.48
00:15:09.760 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:15:09.760 Malloc1p0 : 1.05 4759.04 18.59 0.00 0.00 26835.33 997.93 43611.23
00:15:09.760 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:15:09.760 Malloc1p1 : 1.05 4752.33 18.56 0.00 0.00 26808.59 930.91 42657.98
00:15:09.760 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:15:09.760 Malloc2p0 : 1.05 4745.73 18.54 0.00 0.00 26780.47 1020.28 41704.73
00:15:09.760 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:15:09.760 Malloc2p1 : 1.05 4738.37 18.51 0.00 0.00 26764.47 983.04 40751.48
00:15:09.760 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:15:09.760 Malloc2p2 : 1.05 4731.93 18.48 0.00 0.00 26745.07 990.49 39798.23
00:15:09.760 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:15:09.760 Malloc2p3 : 1.06 4724.95 18.46 0.00 0.00 26723.64 945.80 39083.29
00:15:09.760 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:15:09.760 Malloc2p4 : 1.06 4718.41 18.43 0.00 0.00 26708.74 930.91 38130.04
00:15:09.760 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:15:09.760 Malloc2p5 : 1.06 4712.01 18.41 0.00 0.00 26690.71 953.25 37415.10
00:15:09.760 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:15:09.760 Malloc2p6 : 1.06 4705.68 18.38 0.00 0.00 26666.92 960.70 36461.85
00:15:09.760 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:15:09.760 Malloc2p7 : 1.06 4699.14 18.36 0.00 0.00 26644.10 990.49 35508.60
00:15:09.760 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:15:09.760 TestPT : 1.06 4692.77 18.33 0.00 0.00 26622.02 934.63 34793.66
00:15:09.760 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:15:09.760 raid0 : 1.07 4685.09 18.30 0.00 0.00 26591.31 1794.79 33125.47
00:15:09.760 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:15:09.760 concat0 : 1.07 4677.73 18.27 0.00 0.00 26524.50 1854.37 31218.97
00:15:09.760 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:15:09.760 raid1 : 1.07 4668.87 18.24 0.00 0.00 26451.05 2815.07 28478.37
00:15:09.760 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:15:09.760 AIO0 : 1.07 4657.80 18.19 0.00 0.00 26376.53 1511.80 28597.53
00:15:09.760 ===================================================================================================================
00:15:09.760 Total : 75435.80 294.67 0.00 0.00 26673.90 711.21 44564.48
00:15:11.663 ************************************
00:15:11.663 END TEST bdev_write_zeroes
00:15:11.663 ************************************
00:15:11.663
00:15:11.663 real 0m4.618s
00:15:11.663 user 0m4.018s
00:15:11.663 sys 0m0.404s
00:15:11.663 12:59:15 -- common/autotest_common.sh@1100 -- # xtrace_disable
00:15:11.663 12:59:15 -- common/autotest_common.sh@10 -- # set +x
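Both bdevperf runs above are driven by the same JSON configuration (--json .../test/bdev/bdev.json), which declares the Malloc/raid/passthru stack being exercised. The file itself is not echoed into the log; the following is only a minimal sketch of the shape such a config takes, with a single illustrative Malloc bdev:

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          { "method": "bdev_malloc_create", "params": { "name": "Malloc0", "num_blocks": 262144, "block_size": 512 } }
        ]
      }
    ]
  }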
00:15:11.663 12:59:15 -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:15:11.663 12:59:15 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']'
00:15:11.663 12:59:15 -- common/autotest_common.sh@1081 -- # xtrace_disable
00:15:11.663 12:59:15 -- common/autotest_common.sh@10 -- # set +x
00:15:11.663 ************************************
00:15:11.663 START TEST bdev_json_nonenclosed
00:15:11.663 ************************************
00:15:11.663 12:59:15 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:15:11.664 [2024-04-17 12:59:15.782662] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization...
00:15:11.664 [2024-04-17 12:59:15.784032] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117563 ]
00:15:11.922 [2024-04-17 12:59:15.976320] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:12.181 [2024-04-17 12:59:16.182630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:15:12.181 [2024-04-17 12:59:16.182923] json_config.c: 582:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:15:12.181 [2024-04-17 12:59:16.183111] rpc.c: 193:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:15:12.181 [2024-04-17 12:59:16.183241] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:15:12.440 ************************************
00:15:12.440 END TEST bdev_json_nonenclosed
00:15:12.440 ************************************
00:15:12.440
00:15:12.440 real 0m0.845s
00:15:12.440 user 0m0.614s
00:15:12.440 sys 0m0.129s
00:15:12.440 12:59:16 -- common/autotest_common.sh@1100 -- # xtrace_disable
00:15:12.440 12:59:16 -- common/autotest_common.sh@10 -- # set +x
00:15:12.699 12:59:16 -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:15:12.699 12:59:16 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']'
00:15:12.699 12:59:16 -- common/autotest_common.sh@1081 -- # xtrace_disable
00:15:12.699 12:59:16 -- common/autotest_common.sh@10 -- # set +x
00:15:12.699 ************************************
00:15:12.699 START TEST bdev_json_nonarray
00:15:12.699 ************************************
00:15:12.699 12:59:16 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:15:12.699 [2024-04-17 12:59:16.722251] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization...
00:15:12.699 [2024-04-17 12:59:16.722752] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117605 ]
00:15:12.958 [2024-04-17 12:59:16.894444] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:12.958 [2024-04-17 12:59:17.100043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:15:12.958 [2024-04-17 12:59:17.100380] json_config.c: 588:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:15:12.958 [2024-04-17 12:59:17.100523] rpc.c: 193:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:15:12.958 [2024-04-17 12:59:17.100578] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:15:13.525 ************************************
00:15:13.525 END TEST bdev_json_nonarray
00:15:13.525 ************************************
00:15:13.525
00:15:13.525 real 0m0.850s
00:15:13.525 user 0m0.601s
00:15:13.525 sys 0m0.148s
00:15:13.525 12:59:17 -- common/autotest_common.sh@1100 -- # xtrace_disable
00:15:13.525 12:59:17 -- common/autotest_common.sh@10 -- # set +x
00:15:13.525 12:59:17 -- bdev/blockdev.sh@787 -- # [[ bdev == bdev ]]
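Both failures above are the expected outcomes: json_config first requires the whole file to be enclosed in a JSON object (the check at json_config.c:582), then requires the "subsystems" key to hold an array (json_config.c:588). The actual nonenclosed.json and nonarray.json contents are not reproduced in the log, but files of roughly this shape would trip the two checks:

  "subsystems": []        # nonenclosed: valid key, but not wrapped in { } -> "not enclosed in {}."
  { "subsystems": {} }    # nonarray: enclosed, but an object -> "'subsystems' should be an array."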
00:15:13.525 12:59:17 -- bdev/blockdev.sh@788 -- # run_test bdev_qos qos_test_suite ''
00:15:13.525 12:59:17 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']'
00:15:13.525 12:59:17 -- common/autotest_common.sh@1081 -- # xtrace_disable
00:15:13.525 12:59:17 -- common/autotest_common.sh@10 -- # set +x
00:15:13.525 ************************************
00:15:13.525 START TEST bdev_qos
00:15:13.525 ************************************
00:15:13.525 12:59:17 -- common/autotest_common.sh@1099 -- # qos_test_suite ''
00:15:13.525 12:59:17 -- bdev/blockdev.sh@446 -- # QOS_PID=117647
00:15:13.525 12:59:17 -- bdev/blockdev.sh@445 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 ''
00:15:13.525 Process qos testing pid: 117647
00:15:13.525 12:59:17 -- bdev/blockdev.sh@447 -- # echo 'Process qos testing pid: 117647'
00:15:13.525 12:59:17 -- bdev/blockdev.sh@448 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT
00:15:13.525 12:59:17 -- bdev/blockdev.sh@449 -- # waitforlisten 117647
00:15:13.525 12:59:17 -- common/autotest_common.sh@817 -- # '[' -z 117647 ']'
00:15:13.525 12:59:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:13.525 12:59:17 -- common/autotest_common.sh@822 -- # local max_retries=100
00:15:13.525 12:59:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:13.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:13.525 12:59:17 -- common/autotest_common.sh@826 -- # xtrace_disable
00:15:13.525 12:59:17 -- common/autotest_common.sh@10 -- # set +x
00:15:13.525 [2024-04-17 12:59:17.651546] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization...
00:15:13.525 [2024-04-17 12:59:17.651968] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117647 ]
00:15:13.799 [2024-04-17 12:59:17.814121] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:14.057 [2024-04-17 12:59:18.068109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:15:14.624 12:59:18 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:15:14.624 12:59:18 -- common/autotest_common.sh@850 -- # return 0
00:15:14.624 12:59:18 -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512
00:15:14.624 12:59:18 -- common/autotest_common.sh@549 -- # xtrace_disable
00:15:14.624 12:59:18 -- common/autotest_common.sh@10 -- # set +x
00:15:14.882 Malloc_0
00:15:14.882 12:59:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:15:14.882 12:59:18 -- bdev/blockdev.sh@452 -- # waitforbdev Malloc_0
00:15:14.883 12:59:18 -- common/autotest_common.sh@885 -- # local bdev_name=Malloc_0
00:15:14.883 12:59:18 -- common/autotest_common.sh@886 -- # local bdev_timeout=
00:15:14.883 12:59:18 -- common/autotest_common.sh@887 -- # local i
00:15:14.883 12:59:18 -- common/autotest_common.sh@888 -- # [[ -z '' ]]
00:15:14.883 12:59:18 -- common/autotest_common.sh@888 -- # bdev_timeout=2000
00:15:14.883 12:59:18 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine
00:15:14.883 12:59:18 -- common/autotest_common.sh@549 -- # xtrace_disable
00:15:14.883 12:59:18 -- common/autotest_common.sh@10 -- # set +x
00:15:14.883 12:59:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:15:14.883 12:59:18 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000
00:15:14.883 12:59:18 -- common/autotest_common.sh@549 -- # xtrace_disable
00:15:14.883 12:59:18 -- common/autotest_common.sh@10 -- # set +x
00:15:14.883 [
00:15:14.883 {
00:15:14.883 "name": "Malloc_0",
00:15:14.883 "aliases": [
00:15:14.883 "6c27425a-f22a-49d2-ac7f-9ae306278d2c"
00:15:14.883 ],
00:15:14.883 "product_name": "Malloc disk",
00:15:14.883 "block_size": 512,
00:15:14.883 "num_blocks": 262144,
00:15:14.883 "uuid": "6c27425a-f22a-49d2-ac7f-9ae306278d2c",
00:15:14.883 "assigned_rate_limits": {
00:15:14.883 "rw_ios_per_sec": 0,
00:15:14.883 "rw_mbytes_per_sec": 0,
00:15:14.883 "r_mbytes_per_sec": 0,
00:15:14.883 "w_mbytes_per_sec": 0
00:15:14.883 },
00:15:14.883 "claimed": false,
00:15:14.883 "zoned": false,
00:15:14.883 "supported_io_types": {
00:15:14.883 "read": true,
00:15:14.883 "write": true,
00:15:14.883 "unmap": true,
00:15:14.883 "write_zeroes": true,
00:15:14.883 "flush": true,
00:15:14.883 "reset": true,
00:15:14.883 "compare": false,
00:15:14.883 "compare_and_write": false,
00:15:14.883 "abort": true,
00:15:14.883 "nvme_admin": false,
00:15:14.883 "nvme_io": false
00:15:14.883 },
00:15:14.883 "memory_domains": [
00:15:14.883 {
00:15:14.883 "dma_device_id": "system",
00:15:14.883 "dma_device_type": 1
00:15:14.883 },
00:15:14.883 {
00:15:14.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:14.883 "dma_device_type": 2
00:15:14.883 }
00:15:14.883 ],
00:15:14.883 "driver_specific": {}
00:15:14.883 }
00:15:14.883 ]
00:15:14.883 12:59:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:15:14.883 12:59:18 -- common/autotest_common.sh@893 -- # return 0
00:15:14.883 12:59:18 -- bdev/blockdev.sh@453 -- # rpc_cmd bdev_null_create Null_1 128 512
00:15:14.883 12:59:18 -- common/autotest_common.sh@549 -- # xtrace_disable
00:15:14.883 12:59:18 -- common/autotest_common.sh@10 -- # set +x
00:15:14.883 Null_1
00:15:14.883 12:59:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:15:14.883 12:59:18 -- bdev/blockdev.sh@454 -- # waitforbdev Null_1
00:15:14.883 12:59:18 -- common/autotest_common.sh@885 -- # local bdev_name=Null_1
00:15:14.883 12:59:18 -- common/autotest_common.sh@886 -- # local bdev_timeout=
00:15:14.883 12:59:18 -- common/autotest_common.sh@887 -- # local i
00:15:14.883 12:59:18 -- common/autotest_common.sh@888 -- # [[ -z '' ]]
00:15:14.883 12:59:18 -- common/autotest_common.sh@888 -- # bdev_timeout=2000
00:15:14.883 12:59:18 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine
00:15:14.883 12:59:18 -- common/autotest_common.sh@549 -- # xtrace_disable
00:15:14.883 12:59:18 -- common/autotest_common.sh@10 -- # set +x
00:15:14.883 12:59:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:15:14.883 12:59:18 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000
00:15:14.883 12:59:18 -- common/autotest_common.sh@549 -- # xtrace_disable
00:15:14.883 12:59:18 -- common/autotest_common.sh@10 -- # set +x
00:15:14.883 [
00:15:14.883 {
00:15:14.883 "name": "Null_1",
00:15:14.883 "aliases": [
00:15:14.883 "b40e6bc2-63b6-47e6-b8e7-c2a959fff44d"
00:15:14.883 ],
00:15:14.883 "product_name": "Null disk",
00:15:14.883 "block_size": 512,
00:15:14.883 "num_blocks": 262144,
00:15:14.883 "uuid": "b40e6bc2-63b6-47e6-b8e7-c2a959fff44d",
00:15:14.883 "assigned_rate_limits": {
00:15:14.883 "rw_ios_per_sec": 0,
00:15:14.883 "rw_mbytes_per_sec": 0,
00:15:14.883 "r_mbytes_per_sec": 0,
00:15:14.883 "w_mbytes_per_sec": 0
00:15:14.883 },
00:15:14.883 "claimed": false,
00:15:14.883 "zoned": false,
00:15:14.883 "supported_io_types": {
00:15:14.883 "read": true,
00:15:14.883 "write": true,
00:15:14.883 "unmap": false,
00:15:14.883 "write_zeroes": true,
00:15:14.883 "flush": false,
00:15:14.883 "reset": true,
00:15:14.883 "compare": false,
00:15:14.883 "compare_and_write": false,
00:15:14.883 "abort": true,
00:15:14.883 "nvme_admin": false,
00:15:14.883 "nvme_io": false
00:15:14.883 },
00:15:14.883 "driver_specific": {}
00:15:14.883 }
00:15:14.883 ]
00:15:14.883 12:59:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:15:14.883 12:59:18 -- common/autotest_common.sh@893 -- # return 0
00:15:14.883 12:59:18 -- bdev/blockdev.sh@457 -- # qos_function_test
00:15:14.883 12:59:18 -- bdev/blockdev.sh@456 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:15:14.883 12:59:18 -- bdev/blockdev.sh@410 -- # local qos_lower_iops_limit=1000
00:15:14.883 12:59:18 -- bdev/blockdev.sh@411 -- # local qos_lower_bw_limit=2
00:15:14.883 12:59:18 -- bdev/blockdev.sh@412 -- # local io_result=0
00:15:14.883 12:59:18 -- bdev/blockdev.sh@413 -- # local iops_limit=0
00:15:14.883 12:59:18 -- bdev/blockdev.sh@414 -- # local bw_limit=0
00:15:14.883 12:59:18 -- bdev/blockdev.sh@416 -- # get_io_result IOPS Malloc_0
00:15:14.883 12:59:18 -- bdev/blockdev.sh@375 -- # local limit_type=IOPS
00:15:14.883 12:59:18 -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0
00:15:14.883 12:59:18 -- bdev/blockdev.sh@377 -- # local iostat_result
00:15:14.883 12:59:18 -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:15:14.883 12:59:18 -- bdev/blockdev.sh@378 -- # grep Malloc_0
00:15:14.883 12:59:18 -- bdev/blockdev.sh@378 -- # tail -1
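The get_io_result trace above amounts to a single pipeline: sample the device with iostat.py over five one-second intervals, keep the last sample line, and pull out the column for the limit type (column 2 is IOPS; the BANDWIDTH variant later in this log uses column 6). Condensed into one line under the same paths as the trace, with the fractional part truncated as in the "echo 65926" step:

  iostat_result=$(/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 | grep Malloc_0 | tail -1 | awk '{print $2}')
  echo "${iostat_result%.*}"   # 65926 in the unthrottled run below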
00:15:14.883 Running I/O for 60 seconds...
00:15:20.151 12:59:23 -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 65926.85 263707.41 0.00 0.00 266240.00 0.00 0.00 '
00:15:20.151 12:59:23 -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']'
00:15:20.151 12:59:23 -- bdev/blockdev.sh@380 -- # awk '{print $2}'
00:15:20.151 12:59:23 -- bdev/blockdev.sh@380 -- # iostat_result=65926.85
00:15:20.151 12:59:23 -- bdev/blockdev.sh@385 -- # echo 65926
00:15:20.151 12:59:23 -- bdev/blockdev.sh@416 -- # io_result=65926
00:15:20.151 12:59:23 -- bdev/blockdev.sh@418 -- # iops_limit=16000
00:15:20.151 12:59:23 -- bdev/blockdev.sh@419 -- # '[' 16000 -gt 1000 ']'
00:15:20.151 12:59:23 -- bdev/blockdev.sh@422 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 16000 Malloc_0
00:15:20.151 12:59:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:15:20.151 12:59:23 -- common/autotest_common.sh@10 -- # set +x
00:15:20.151 12:59:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:15:20.151 12:59:23 -- bdev/blockdev.sh@423 -- # run_test bdev_qos_iops run_qos_test 16000 IOPS Malloc_0
00:15:20.151 12:59:23 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']'
00:15:20.151 12:59:24 -- common/autotest_common.sh@1081 -- # xtrace_disable
00:15:20.151 12:59:24 -- common/autotest_common.sh@10 -- # set +x
00:15:20.151 ************************************
00:15:20.151 START TEST bdev_qos_iops
00:15:20.151 ************************************
00:15:20.151 12:59:24 -- common/autotest_common.sh@1099 -- # run_qos_test 16000 IOPS Malloc_0
00:15:20.151 12:59:24 -- bdev/blockdev.sh@389 -- # local qos_limit=16000
00:15:20.151 12:59:24 -- bdev/blockdev.sh@390 -- # local qos_result=0
00:15:20.151 12:59:24 -- bdev/blockdev.sh@392 -- # get_io_result IOPS Malloc_0
00:15:20.151 12:59:24 -- bdev/blockdev.sh@375 -- # local limit_type=IOPS
00:15:20.151 12:59:24 -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0
00:15:20.151 12:59:24 -- bdev/blockdev.sh@377 -- # local iostat_result
00:15:20.151 12:59:24 -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:15:20.151 12:59:24 -- bdev/blockdev.sh@378 -- # grep Malloc_0
00:15:20.151 12:59:24 -- bdev/blockdev.sh@378 -- # tail -1
00:15:25.421 12:59:29 -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 15995.86 63983.42 0.00 0.00 64832.00 0.00 0.00 '
00:15:25.421 12:59:29 -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']'
00:15:25.421 12:59:29 -- bdev/blockdev.sh@380 -- # awk '{print $2}'
00:15:25.421 12:59:29 -- bdev/blockdev.sh@380 -- # iostat_result=15995.86
00:15:25.421 12:59:29 -- bdev/blockdev.sh@385 -- # echo 15995
00:15:25.421 ************************************
00:15:25.421 END TEST bdev_qos_iops
00:15:25.421 ************************************
00:15:25.421 12:59:29 -- bdev/blockdev.sh@392 -- # qos_result=15995
00:15:25.421 12:59:29 -- bdev/blockdev.sh@393 -- # '[' IOPS = BANDWIDTH ']'
00:15:25.421 12:59:29 -- bdev/blockdev.sh@396 -- # lower_limit=14400
00:15:25.421 12:59:29 -- bdev/blockdev.sh@397 -- # upper_limit=17600
00:15:25.421 12:59:29 -- bdev/blockdev.sh@400 -- # '[' 15995 -lt 14400 ']'
00:15:25.421 12:59:29 -- bdev/blockdev.sh@400 -- # '[' 15995 -gt 17600 ']'
00:15:25.421
00:15:25.421 real 0m5.203s
00:15:25.421 user 0m0.110s
00:15:25.421 sys 0m0.019s
00:15:25.421 12:59:29 -- common/autotest_common.sh@1100 -- # xtrace_disable
00:15:25.421 12:59:29 -- common/autotest_common.sh@10 -- # set +x
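The pass criterion visible in the trace is a +/-10% window around the configured limit: with qos_limit=16000 the script derives lower_limit=14400 and upper_limit=17600, and the measured 15995 IOPS falls inside. The same 9/10 and 11/10 integer arithmetic reproduces every bound pair seen later in this log (8294/10137 for 9216, 1843/2252 for 2048):

  qos_limit=16000
  lower_limit=$(( qos_limit * 9 / 10 ))    # 14400
  upper_limit=$(( qos_limit * 11 / 10 ))   # 17600
  qos_result=15995
  if [ "$qos_result" -lt "$lower_limit" ] || [ "$qos_result" -gt "$upper_limit" ]; then
      echo "qos_result $qos_result outside [$lower_limit, $upper_limit]"
  fi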
00:15:25.421 12:59:29 -- bdev/blockdev.sh@427 -- # get_io_result BANDWIDTH Null_1
00:15:25.421 12:59:29 -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH
00:15:25.421 12:59:29 -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1
00:15:25.421 12:59:29 -- bdev/blockdev.sh@377 -- # local iostat_result
00:15:25.421 12:59:29 -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:15:25.421 12:59:29 -- bdev/blockdev.sh@378 -- # grep Null_1
00:15:25.421 12:59:29 -- bdev/blockdev.sh@378 -- # tail -1
00:15:30.694 12:59:34 -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 22987.32 91949.27 0.00 0.00 93184.00 0.00 0.00 '
00:15:30.694 12:59:34 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']'
00:15:30.694 12:59:34 -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:15:30.694 12:59:34 -- bdev/blockdev.sh@382 -- # awk '{print $6}'
00:15:30.694 12:59:34 -- bdev/blockdev.sh@382 -- # iostat_result=93184.00
00:15:30.694 12:59:34 -- bdev/blockdev.sh@385 -- # echo 93184
00:15:30.694 12:59:34 -- bdev/blockdev.sh@427 -- # bw_limit=93184
00:15:30.694 12:59:34 -- bdev/blockdev.sh@428 -- # bw_limit=9
00:15:30.694 12:59:34 -- bdev/blockdev.sh@429 -- # '[' 9 -lt 2 ']'
00:15:30.694 12:59:34 -- bdev/blockdev.sh@432 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 9 Null_1
00:15:30.694 12:59:34 -- common/autotest_common.sh@549 -- # xtrace_disable
00:15:30.694 12:59:34 -- common/autotest_common.sh@10 -- # set +x
00:15:30.694 12:59:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:15:30.694 12:59:34 -- bdev/blockdev.sh@433 -- # run_test bdev_qos_bw run_qos_test 9 BANDWIDTH Null_1
00:15:30.694 12:59:34 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']'
00:15:30.694 12:59:34 -- common/autotest_common.sh@1081 -- # xtrace_disable
00:15:30.694 12:59:34 -- common/autotest_common.sh@10 -- # set +x
00:15:30.694 ************************************
00:15:30.694 START TEST bdev_qos_bw
00:15:30.694 ************************************
00:15:30.694 12:59:34 -- common/autotest_common.sh@1099 -- # run_qos_test 9 BANDWIDTH Null_1
00:15:30.694 12:59:34 -- bdev/blockdev.sh@389 -- # local qos_limit=9
00:15:30.694 12:59:34 -- bdev/blockdev.sh@390 -- # local qos_result=0
00:15:30.694 12:59:34 -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Null_1
00:15:30.694 12:59:34 -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH
00:15:30.694 12:59:34 -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1
00:15:30.694 12:59:34 -- bdev/blockdev.sh@377 -- # local iostat_result
00:15:30.694 12:59:34 -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:15:30.694 12:59:34 -- bdev/blockdev.sh@378 -- # grep Null_1
00:15:30.694 12:59:34 -- bdev/blockdev.sh@378 -- # tail -1
00:15:35.974 12:59:39 -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 2306.33 9225.32 0.00 0.00 9392.00 0.00 0.00 '
00:15:35.974 12:59:39 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']'
00:15:35.974 12:59:39 -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:15:35.974 12:59:39 -- bdev/blockdev.sh@382 -- # awk '{print $6}'
00:15:35.974 12:59:39 -- bdev/blockdev.sh@382 -- # iostat_result=9392.00
00:15:35.974 12:59:39 -- bdev/blockdev.sh@385 -- # echo 9392
00:15:35.974 ************************************
00:15:35.974 END TEST bdev_qos_bw
00:15:35.974 ************************************
00:15:35.974 12:59:39 -- bdev/blockdev.sh@392 -- # qos_result=9392
00:15:35.974 12:59:39 -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:15:35.974 12:59:39 -- bdev/blockdev.sh@394 -- # qos_limit=9216
00:15:35.974 12:59:39 -- bdev/blockdev.sh@396 -- # lower_limit=8294
00:15:35.974 12:59:39 -- bdev/blockdev.sh@397 -- # upper_limit=10137
00:15:35.974 12:59:39 -- bdev/blockdev.sh@400 -- # '[' 9392 -lt 8294 ']'
00:15:35.974 12:59:39 -- bdev/blockdev.sh@400 -- # '[' 9392 -gt 10137 ']'
00:15:35.974
00:15:35.974 real 0m5.249s
00:15:35.974 user 0m0.111s
00:15:35.974 sys 0m0.028s
00:15:35.974 12:59:39 -- common/autotest_common.sh@1100 -- # xtrace_disable
00:15:35.974 12:59:39 -- common/autotest_common.sh@10 -- # set +x
00:15:35.974 12:59:39 -- bdev/blockdev.sh@436 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0
00:15:35.974 12:59:39 -- common/autotest_common.sh@549 -- # xtrace_disable
00:15:35.974 12:59:39 -- common/autotest_common.sh@10 -- # set +x
00:15:35.974 12:59:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:15:35.974 12:59:39 -- bdev/blockdev.sh@437 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0
00:15:35.974 12:59:39 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']'
00:15:35.974 12:59:39 -- common/autotest_common.sh@1081 -- # xtrace_disable
00:15:35.974 12:59:39 -- common/autotest_common.sh@10 -- # set +x
00:15:35.974 ************************************
00:15:35.974 START TEST bdev_qos_ro_bw
00:15:35.974 ************************************
00:15:35.974 12:59:39 -- common/autotest_common.sh@1099 -- # run_qos_test 2 BANDWIDTH Malloc_0
00:15:35.974 12:59:39 -- bdev/blockdev.sh@389 -- # local qos_limit=2
00:15:35.974 12:59:39 -- bdev/blockdev.sh@390 -- # local qos_result=0
00:15:35.974 12:59:39 -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Malloc_0
00:15:35.974 12:59:39 -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH
00:15:35.974 12:59:39 -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0
00:15:35.974 12:59:39 -- bdev/blockdev.sh@377 -- # local iostat_result
00:15:35.974 12:59:39 -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:15:35.974 12:59:39 -- bdev/blockdev.sh@378 -- # grep Malloc_0
00:15:35.974 12:59:39 -- bdev/blockdev.sh@378 -- # tail -1
00:15:41.238 12:59:45 -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 511.77 2047.07 0.00 0.00 2060.00 0.00 0.00 '
00:15:41.238 12:59:45 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']'
00:15:41.238 12:59:45 -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:15:41.238 12:59:45 -- bdev/blockdev.sh@382 -- # awk '{print $6}'
00:15:41.238 12:59:45 -- bdev/blockdev.sh@382 -- # iostat_result=2060.00
00:15:41.238 12:59:45 -- bdev/blockdev.sh@385 -- # echo 2060
00:15:41.238 12:59:45 -- bdev/blockdev.sh@392 -- # qos_result=2060
00:15:41.238 12:59:45 -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:15:41.238 12:59:45 -- bdev/blockdev.sh@394 -- # qos_limit=2048
00:15:41.238 12:59:45 -- bdev/blockdev.sh@396 -- # lower_limit=1843
00:15:41.238 12:59:45 -- bdev/blockdev.sh@397 -- # upper_limit=2252
00:15:41.238 12:59:45 -- bdev/blockdev.sh@400 -- # '[' 2060 -lt 1843 ']'
00:15:41.238 12:59:45 -- bdev/blockdev.sh@400 -- # '[' 2060 -gt 2252 ']'
00:15:41.238
00:15:41.238 real 0m5.155s
00:15:41.238 user 0m0.113s
00:15:41.238 sys 0m0.015s
00:15:41.238 12:59:45 -- common/autotest_common.sh@1100 -- # xtrace_disable
00:15:41.238 ************************************
00:15:41.238 END TEST bdev_qos_ro_bw
00:15:41.238 ************************************
00:15:41.238 12:59:45 -- common/autotest_common.sh@10 -- # set +x
00:15:41.238 12:59:45 -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_malloc_delete Malloc_0
00:15:41.238 12:59:45 -- common/autotest_common.sh@549 -- # xtrace_disable
00:15:41.238 12:59:45 -- common/autotest_common.sh@10 -- # set +x
00:15:41.805 12:59:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:15:41.805 12:59:45 -- bdev/blockdev.sh@460 -- # rpc_cmd bdev_null_delete Null_1
00:15:41.805 12:59:45 -- common/autotest_common.sh@549 -- # xtrace_disable
00:15:41.805 12:59:45 -- common/autotest_common.sh@10 -- # set +x
00:15:41.805
00:15:41.805 Latency(us)
00:15:41.805 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:41.805 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:15:41.805 Malloc_0 : 26.76 21989.53 85.90 0.00 0.00 11533.25 2174.60 503316.48
00:15:41.805 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:15:41.805 Null_1 : 26.98 22814.13 89.12 0.00 0.00 11193.94 789.41 221154.21
00:15:41.805 ===================================================================================================================
00:15:41.805 Total : 44803.66 175.01 0.00 0.00 11359.77 789.41 503316.48
00:15:41.805 0
00:15:41.805 12:59:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:15:41.805 12:59:45 -- bdev/blockdev.sh@461 -- # killprocess 117647
00:15:41.805 12:59:45 -- common/autotest_common.sh@924 -- # '[' -z 117647 ']'
00:15:41.805 12:59:45 -- common/autotest_common.sh@928 -- # kill -0 117647
00:15:41.805 12:59:45 -- common/autotest_common.sh@929 -- # uname
00:15:41.805 12:59:45 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']'
00:15:41.805 12:59:45 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 117647
00:15:41.805 killing process with pid 117647
00:15:41.805 Received shutdown signal, test time was about 27.007629 seconds
00:15:41.805
00:15:41.805 Latency(us)
00:15:41.805 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:41.805 ===================================================================================================================
00:15:41.805 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:15:41.805 12:59:45 -- common/autotest_common.sh@930 -- # process_name=reactor_1
00:15:41.805 12:59:45 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']'
00:15:41.805 12:59:45 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 117647'
00:15:41.805 12:59:45 -- common/autotest_common.sh@943 -- # kill 117647
00:15:41.805 12:59:45 -- common/autotest_common.sh@948 -- # wait 117647
00:15:43.206 ************************************
00:15:43.206 END TEST bdev_qos
00:15:43.206 ************************************
00:15:43.206 12:59:47 -- bdev/blockdev.sh@462 -- # trap - SIGINT SIGTERM EXIT
00:15:43.206
00:15:43.206 real 0m29.661s
00:15:43.206 user 0m30.487s
00:15:43.206 sys 0m0.572s
00:15:43.206 12:59:47 -- common/autotest_common.sh@1100 -- # xtrace_disable
00:15:43.206 12:59:47 -- common/autotest_common.sh@10 -- # set +x
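The bdev_qos suite that just finished exercised all three limit types through the same RPC, as traced above. Reissued by hand they would look like this (a sketch: scripts/rpc.py is the usual command-line entry point behind the rpc_cmd wrapper, and the values are the ones the test derived from its unthrottled baselines):

  scripts/rpc.py bdev_set_qos_limit --rw_ios_per_sec 16000 Malloc_0   # read+write IOPS cap
  scripts/rpc.py bdev_set_qos_limit --rw_mbytes_per_sec 9 Null_1      # read+write bandwidth cap
  scripts/rpc.py bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0     # read-only bandwidth cap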
00:15:43.206 12:59:47 -- bdev/blockdev.sh@789 -- # run_test bdev_qd_sampling qd_sampling_test_suite ''
00:15:43.206 12:59:47 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']'
00:15:43.206 12:59:47 -- common/autotest_common.sh@1081 -- # xtrace_disable
00:15:43.206 12:59:47 -- common/autotest_common.sh@10 -- # set +x
00:15:43.206 ************************************
00:15:43.206 START TEST bdev_qd_sampling
00:15:43.206 ************************************
00:15:43.206 Process bdev QD sampling period testing pid: 118185
00:15:43.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:43.206 12:59:47 -- common/autotest_common.sh@1099 -- # qd_sampling_test_suite ''
00:15:43.206 12:59:47 -- bdev/blockdev.sh@538 -- # QD_DEV=Malloc_QD
00:15:43.206 12:59:47 -- bdev/blockdev.sh@541 -- # QD_PID=118185
00:15:43.206 12:59:47 -- bdev/blockdev.sh@542 -- # echo 'Process bdev QD sampling period testing pid: 118185'
00:15:43.206 12:59:47 -- bdev/blockdev.sh@543 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT
00:15:43.206 12:59:47 -- bdev/blockdev.sh@540 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C ''
00:15:43.206 12:59:47 -- bdev/blockdev.sh@544 -- # waitforlisten 118185
00:15:43.206 12:59:47 -- common/autotest_common.sh@817 -- # '[' -z 118185 ']'
00:15:43.206 12:59:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:43.206 12:59:47 -- common/autotest_common.sh@822 -- # local max_retries=100
00:15:43.206 12:59:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:43.206 12:59:47 -- common/autotest_common.sh@826 -- # xtrace_disable
00:15:43.206 12:59:47 -- common/autotest_common.sh@10 -- # set +x
00:15:43.465 [2024-04-17 12:59:47.398155] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization...
00:15:43.465 [2024-04-17 12:59:47.398667] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118185 ]
00:15:43.465 [2024-04-17 12:59:47.581674] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2
00:15:43.723 [2024-04-17 12:59:47.791208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:15:43.723 [2024-04-17 12:59:47.791215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:15:44.291 12:59:48 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:15:44.291 12:59:48 -- common/autotest_common.sh@850 -- # return 0
00:15:44.291 12:59:48 -- bdev/blockdev.sh@546 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512
00:15:44.291 12:59:48 -- common/autotest_common.sh@549 -- # xtrace_disable
00:15:44.291 12:59:48 -- common/autotest_common.sh@10 -- # set +x
00:15:44.550 Malloc_QD
00:15:44.550 12:59:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:15:44.550 12:59:48 -- bdev/blockdev.sh@547 -- # waitforbdev Malloc_QD
00:15:44.550 12:59:48 -- common/autotest_common.sh@885 -- # local bdev_name=Malloc_QD
00:15:44.550 12:59:48 -- common/autotest_common.sh@886 -- # local bdev_timeout=
00:15:44.550 12:59:48 -- common/autotest_common.sh@887 -- # local i
00:15:44.550 12:59:48 -- common/autotest_common.sh@888 -- # [[ -z '' ]]
00:15:44.550 12:59:48 -- common/autotest_common.sh@888 -- # bdev_timeout=2000
00:15:44.550 12:59:48 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine
00:15:44.550 12:59:48 -- common/autotest_common.sh@549 -- # xtrace_disable
00:15:44.550 12:59:48 -- common/autotest_common.sh@10 -- # set +x
00:15:44.550 12:59:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:15:44.550 12:59:48 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000
00:15:44.550 12:59:48 -- common/autotest_common.sh@549 -- # xtrace_disable
00:15:44.550 12:59:48 -- common/autotest_common.sh@10 -- # set +x
00:15:44.550 [
00:15:44.550 {
00:15:44.550 "name": "Malloc_QD",
00:15:44.550 "aliases": [
00:15:44.550 "667cb366-a11e-4c7d-a6c0-f648ed910ffd"
00:15:44.550 ],
00:15:44.551 "product_name": "Malloc disk",
00:15:44.551 "block_size": 512,
00:15:44.551 "num_blocks": 262144,
00:15:44.551 "uuid": "667cb366-a11e-4c7d-a6c0-f648ed910ffd",
00:15:44.551 "assigned_rate_limits": {
00:15:44.551 "rw_ios_per_sec": 0,
00:15:44.551 "rw_mbytes_per_sec": 0,
00:15:44.551 "r_mbytes_per_sec": 0,
00:15:44.551 "w_mbytes_per_sec": 0
00:15:44.551 },
00:15:44.551 "claimed": false,
00:15:44.551 "zoned": false,
00:15:44.551 "supported_io_types": {
00:15:44.551 "read": true,
00:15:44.551 "write": true,
00:15:44.551 "unmap": true,
00:15:44.551 "write_zeroes": true,
00:15:44.551 "flush": true,
00:15:44.551 "reset": true,
00:15:44.551 "compare": false,
00:15:44.551 "compare_and_write": false,
00:15:44.551 "abort": true,
00:15:44.551 "nvme_admin": false,
00:15:44.551 "nvme_io": false
00:15:44.551 },
00:15:44.551 "memory_domains": [
00:15:44.551 {
00:15:44.551 "dma_device_id": "system",
00:15:44.551 "dma_device_type": 1
00:15:44.551 },
00:15:44.551 {
00:15:44.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:44.551 "dma_device_type": 2
00:15:44.551 }
00:15:44.551 ],
00:15:44.551 "driver_specific": {}
00:15:44.551 }
00:15:44.551 ]
00:15:44.551 12:59:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:15:44.551 12:59:48 -- common/autotest_common.sh@893 -- # return 0
00:15:44.551 12:59:48 -- bdev/blockdev.sh@550 -- # sleep 2
00:15:44.551 12:59:48 -- bdev/blockdev.sh@549 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:15:44.551 Running I/O for 5 seconds...
00:15:46.453 12:59:50 -- bdev/blockdev.sh@551 -- # qd_sampling_function_test Malloc_QD
00:15:46.453 12:59:50 -- bdev/blockdev.sh@519 -- # local bdev_name=Malloc_QD
00:15:46.453 12:59:50 -- bdev/blockdev.sh@520 -- # local sampling_period=10
00:15:46.453 12:59:50 -- bdev/blockdev.sh@521 -- # local iostats
00:15:46.453 12:59:50 -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10
00:15:46.453 12:59:50 -- common/autotest_common.sh@549 -- # xtrace_disable
00:15:46.453 12:59:50 -- common/autotest_common.sh@10 -- # set +x
00:15:46.453 12:59:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:15:46.453 12:59:50 -- bdev/blockdev.sh@525 -- # rpc_cmd bdev_get_iostat -b Malloc_QD
00:15:46.453 12:59:50 -- common/autotest_common.sh@549 -- # xtrace_disable
00:15:46.453 12:59:50 -- common/autotest_common.sh@10 -- # set +x
00:15:46.453 12:59:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:15:46.453 12:59:50 -- bdev/blockdev.sh@525 -- # iostats='{
00:15:46.453 "tick_rate": 2200000000,
00:15:46.453 "ticks": 1705071711029,
00:15:46.453 "bdevs": [
00:15:46.453 {
00:15:46.453 "name": "Malloc_QD",
00:15:46.453 "bytes_read": 844141056,
00:15:46.453 "num_read_ops": 206083,
00:15:46.453 "bytes_written": 0,
00:15:46.453 "num_write_ops": 0,
00:15:46.453 "bytes_unmapped": 0,
00:15:46.454 "num_unmap_ops": 0,
00:15:46.454 "bytes_copied": 0,
00:15:46.454 "num_copy_ops": 0,
00:15:46.454 "read_latency_ticks": 2164177772767,
00:15:46.454 "max_read_latency_ticks": 13510251,
00:15:46.454 "min_read_latency_ticks": 367479,
00:15:46.454 "write_latency_ticks": 0,
00:15:46.454 "max_write_latency_ticks": 0,
00:15:46.454 "min_write_latency_ticks": 0,
00:15:46.454 "unmap_latency_ticks": 0,
00:15:46.454 "max_unmap_latency_ticks": 0,
00:15:46.454 "min_unmap_latency_ticks": 0,
00:15:46.454 "copy_latency_ticks": 0,
00:15:46.454 "max_copy_latency_ticks": 0,
00:15:46.454 "min_copy_latency_ticks": 0,
00:15:46.454 "io_error": {},
"queue_depth_polling_period": 10, 00:15:46.454 "queue_depth": 512, 00:15:46.454 "io_time": 20, 00:15:46.454 "weighted_io_time": 10240 00:15:46.454 } 00:15:46.454 ] 00:15:46.454 }' 00:15:46.454 12:59:50 -- bdev/blockdev.sh@527 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:15:46.713 12:59:50 -- bdev/blockdev.sh@527 -- # qd_sampling_period=10 00:15:46.713 12:59:50 -- bdev/blockdev.sh@529 -- # '[' 10 == null ']' 00:15:46.713 12:59:50 -- bdev/blockdev.sh@529 -- # '[' 10 -ne 10 ']' 00:15:46.713 12:59:50 -- bdev/blockdev.sh@553 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:15:46.713 12:59:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:46.713 12:59:50 -- common/autotest_common.sh@10 -- # set +x 00:15:46.713 00:15:46.713 Latency(us) 00:15:46.713 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.713 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:15:46.713 Malloc_QD : 2.01 54251.64 211.92 0.00 0.00 4707.41 1176.67 6017.40 00:15:46.713 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:15:46.713 Malloc_QD : 2.01 52907.22 206.67 0.00 0.00 4827.25 875.05 6166.34 00:15:46.713 =================================================================================================================== 00:15:46.713 Total : 107158.87 418.59 0.00 0.00 4766.62 875.05 6166.34 00:15:46.713 0 00:15:46.713 12:59:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:46.713 12:59:50 -- bdev/blockdev.sh@554 -- # killprocess 118185 00:15:46.713 12:59:50 -- common/autotest_common.sh@924 -- # '[' -z 118185 ']' 00:15:46.713 12:59:50 -- common/autotest_common.sh@928 -- # kill -0 118185 00:15:46.714 12:59:50 -- common/autotest_common.sh@929 -- # uname 00:15:46.714 12:59:50 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:15:46.714 12:59:50 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 118185 00:15:46.714 12:59:50 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:15:46.714 12:59:50 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:15:46.714 12:59:50 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 118185' 00:15:46.714 killing process with pid 118185 00:15:46.714 12:59:50 -- common/autotest_common.sh@943 -- # kill 118185 00:15:46.714 Received shutdown signal, test time was about 2.148775 seconds 00:15:46.714 00:15:46.714 Latency(us) 00:15:46.714 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.714 =================================================================================================================== 00:15:46.714 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:46.714 12:59:50 -- common/autotest_common.sh@948 -- # wait 118185 00:15:48.091 ************************************ 00:15:48.091 END TEST bdev_qd_sampling 00:15:48.091 ************************************ 00:15:48.091 12:59:52 -- bdev/blockdev.sh@555 -- # trap - SIGINT SIGTERM EXIT 00:15:48.091 00:15:48.091 real 0m4.772s 00:15:48.091 user 0m8.925s 00:15:48.091 sys 0m0.371s 00:15:48.091 12:59:52 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:15:48.091 12:59:52 -- common/autotest_common.sh@10 -- # set +x 00:15:48.091 12:59:52 -- bdev/blockdev.sh@790 -- # run_test bdev_error error_test_suite '' 00:15:48.091 12:59:52 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:15:48.091 12:59:52 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:15:48.091 12:59:52 -- common/autotest_common.sh@10 -- # set +x 00:15:48.091 
00:15:48.091 ************************************
00:15:48.091 START TEST bdev_error
00:15:48.091 ************************************
00:15:48.091 12:59:52 -- common/autotest_common.sh@1099 -- # error_test_suite ''
00:15:48.091 12:59:52 -- bdev/blockdev.sh@466 -- # DEV_1=Dev_1
00:15:48.091 12:59:52 -- bdev/blockdev.sh@467 -- # DEV_2=Dev_2
00:15:48.091 12:59:52 -- bdev/blockdev.sh@468 -- # ERR_DEV=EE_Dev_1
00:15:48.091 12:59:52 -- bdev/blockdev.sh@472 -- # ERR_PID=118310
00:15:48.091 Process error testing pid: 118310
00:15:48.091 12:59:52 -- bdev/blockdev.sh@473 -- # echo 'Process error testing pid: 118310'
00:15:48.091 12:59:52 -- bdev/blockdev.sh@474 -- # waitforlisten 118310
00:15:48.091 12:59:52 -- bdev/blockdev.sh@471 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f ''
00:15:48.091 12:59:52 -- common/autotest_common.sh@817 -- # '[' -z 118310 ']'
00:15:48.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:48.092 12:59:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:48.092 12:59:52 -- common/autotest_common.sh@822 -- # local max_retries=100
00:15:48.092 12:59:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:48.092 12:59:52 -- common/autotest_common.sh@826 -- # xtrace_disable
00:15:48.092 12:59:52 -- common/autotest_common.sh@10 -- # set +x
00:15:48.350 [2024-04-17 12:59:52.237399] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization...
00:15:48.350 [2024-04-17 12:59:52.237665] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118310 ]
00:15:48.350 [2024-04-17 12:59:52.407381] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:48.609 [2024-04-17 12:59:52.632543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:15:49.178 12:59:53 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:15:49.178 12:59:53 -- common/autotest_common.sh@850 -- # return 0
00:15:49.178 12:59:53 -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512
00:15:49.178 12:59:53 -- common/autotest_common.sh@549 -- # xtrace_disable
00:15:49.178 12:59:53 -- common/autotest_common.sh@10 -- # set +x
00:15:49.437 Dev_1
00:15:49.437 12:59:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:15:49.437 12:59:53 -- bdev/blockdev.sh@477 -- # waitforbdev Dev_1
00:15:49.437 12:59:53 -- common/autotest_common.sh@885 -- # local bdev_name=Dev_1
00:15:49.437 12:59:53 -- common/autotest_common.sh@886 -- # local bdev_timeout=
00:15:49.437 12:59:53 -- common/autotest_common.sh@887 -- # local i
00:15:49.437 12:59:53 -- common/autotest_common.sh@888 -- # [[ -z '' ]]
00:15:49.437 12:59:53 -- common/autotest_common.sh@888 -- # bdev_timeout=2000
00:15:49.437 12:59:53 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine
00:15:49.437 12:59:53 -- common/autotest_common.sh@549 -- # xtrace_disable
00:15:49.437 12:59:53 -- common/autotest_common.sh@10 -- # set +x
00:15:49.437 12:59:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:15:49.437 12:59:53 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000
00:15:49.437 12:59:53 -- common/autotest_common.sh@549 -- # xtrace_disable
00:15:49.437 12:59:53 -- common/autotest_common.sh@10 -- # set +x
00:15:49.437 [
00:15:49.437 {
00:15:49.437 "name": "Dev_1",
00:15:49.437 "aliases": [
00:15:49.437 "ddfd729b-6dfc-4702-a401-7fda5ff82631"
00:15:49.437 ],
00:15:49.437 "product_name": "Malloc disk",
00:15:49.437 "block_size": 512,
00:15:49.437 "num_blocks": 262144,
00:15:49.437 "uuid": "ddfd729b-6dfc-4702-a401-7fda5ff82631",
00:15:49.437 "assigned_rate_limits": {
00:15:49.437 "rw_ios_per_sec": 0,
00:15:49.437 "rw_mbytes_per_sec": 0,
00:15:49.437 "r_mbytes_per_sec": 0,
00:15:49.437 "w_mbytes_per_sec": 0
00:15:49.437 },
00:15:49.437 "claimed": false,
00:15:49.437 "zoned": false,
00:15:49.437 "supported_io_types": {
00:15:49.437 "read": true,
00:15:49.437 "write": true,
00:15:49.437 "unmap": true,
00:15:49.437 "write_zeroes": true,
00:15:49.437 "flush": true,
00:15:49.437 "reset": true,
00:15:49.437 "compare": false,
00:15:49.437 "compare_and_write": false,
00:15:49.437 "abort": true,
00:15:49.437 "nvme_admin": false,
00:15:49.437 "nvme_io": false
00:15:49.437 },
00:15:49.437 "memory_domains": [
00:15:49.437 {
00:15:49.437 "dma_device_id": "system",
00:15:49.437 "dma_device_type": 1
00:15:49.437 },
00:15:49.437 {
00:15:49.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:49.437 "dma_device_type": 2
00:15:49.437 }
00:15:49.437 ],
00:15:49.437 "driver_specific": {}
00:15:49.437 }
00:15:49.437 ]
00:15:49.437 12:59:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:15:49.437 12:59:53 -- common/autotest_common.sh@893 -- # return 0
00:15:49.437 12:59:53 -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_error_create Dev_1
00:15:49.437 12:59:53 -- common/autotest_common.sh@549 -- # xtrace_disable
00:15:49.437 12:59:53 -- common/autotest_common.sh@10 -- # set +x
00:15:49.437 true
00:15:49.437 12:59:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:15:49.437 12:59:53 -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512
00:15:49.437 12:59:53 -- common/autotest_common.sh@549 -- # xtrace_disable
00:15:49.437 12:59:53 -- common/autotest_common.sh@10 -- # set +x
00:15:49.695 Dev_2
00:15:49.695 12:59:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:15:49.695 12:59:53 -- bdev/blockdev.sh@480 -- # waitforbdev Dev_2
00:15:49.695 12:59:53 -- common/autotest_common.sh@885 -- # local bdev_name=Dev_2
00:15:49.695 12:59:53 -- common/autotest_common.sh@886 -- # local bdev_timeout=
00:15:49.695 12:59:53 -- common/autotest_common.sh@887 -- # local i
00:15:49.695 12:59:53 -- common/autotest_common.sh@888 -- # [[ -z '' ]]
00:15:49.695 12:59:53 -- common/autotest_common.sh@888 -- # bdev_timeout=2000
00:15:49.695 12:59:53 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine
00:15:49.695 12:59:53 -- common/autotest_common.sh@549 -- # xtrace_disable
00:15:49.695 12:59:53 -- common/autotest_common.sh@10 -- # set +x
00:15:49.695 12:59:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:15:49.695 12:59:53 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000
00:15:49.695 12:59:53 -- common/autotest_common.sh@549 -- # xtrace_disable
00:15:49.695 12:59:53 -- common/autotest_common.sh@10 -- # set +x
00:15:49.695 [
00:15:49.695 {
00:15:49.695 "name": "Dev_2",
00:15:49.695 "aliases": [
00:15:49.695 "6c7ac535-0992-495b-a64b-0e97620b1663"
00:15:49.695 ],
00:15:49.695 "product_name": "Malloc disk",
00:15:49.695 "block_size": 512,
00:15:49.695 "num_blocks": 262144,
00:15:49.695 "uuid": "6c7ac535-0992-495b-a64b-0e97620b1663",
00:15:49.695 "assigned_rate_limits": {
00:15:49.695 "rw_ios_per_sec": 0,
00:15:49.695 "rw_mbytes_per_sec": 0,
"r_mbytes_per_sec": 0, 00:15:49.696 "w_mbytes_per_sec": 0 00:15:49.696 }, 00:15:49.696 "claimed": false, 00:15:49.696 "zoned": false, 00:15:49.696 "supported_io_types": { 00:15:49.696 "read": true, 00:15:49.696 "write": true, 00:15:49.696 "unmap": true, 00:15:49.696 "write_zeroes": true, 00:15:49.696 "flush": true, 00:15:49.696 "reset": true, 00:15:49.696 "compare": false, 00:15:49.696 "compare_and_write": false, 00:15:49.696 "abort": true, 00:15:49.696 "nvme_admin": false, 00:15:49.696 "nvme_io": false 00:15:49.696 }, 00:15:49.696 "memory_domains": [ 00:15:49.696 { 00:15:49.696 "dma_device_id": "system", 00:15:49.696 "dma_device_type": 1 00:15:49.696 }, 00:15:49.696 { 00:15:49.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.696 "dma_device_type": 2 00:15:49.696 } 00:15:49.696 ], 00:15:49.696 "driver_specific": {} 00:15:49.696 } 00:15:49.696 ] 00:15:49.696 12:59:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:49.696 12:59:53 -- common/autotest_common.sh@893 -- # return 0 00:15:49.696 12:59:53 -- bdev/blockdev.sh@481 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:15:49.696 12:59:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:49.696 12:59:53 -- common/autotest_common.sh@10 -- # set +x 00:15:49.696 12:59:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:49.696 12:59:53 -- bdev/blockdev.sh@484 -- # sleep 1 00:15:49.696 12:59:53 -- bdev/blockdev.sh@483 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:15:49.696 Running I/O for 5 seconds... 00:15:50.630 12:59:54 -- bdev/blockdev.sh@487 -- # kill -0 118310 00:15:50.630 12:59:54 -- bdev/blockdev.sh@488 -- # echo 'Process is existed as continue on error is set. Pid: 118310' 00:15:50.630 Process is existed as continue on error is set. 
Pid: 118310 00:15:50.630 12:59:54 -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:15:50.630 12:59:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.630 12:59:54 -- common/autotest_common.sh@10 -- # set +x 00:15:50.631 12:59:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.631 12:59:54 -- bdev/blockdev.sh@496 -- # rpc_cmd bdev_malloc_delete Dev_1 00:15:50.631 12:59:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.631 12:59:54 -- common/autotest_common.sh@10 -- # set +x 00:15:50.631 Timeout while waiting for response: 00:15:50.631 00:15:50.631 00:15:50.890 12:59:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.890 12:59:54 -- bdev/blockdev.sh@497 -- # sleep 5 00:15:55.079 00:15:55.079 Latency(us) 00:15:55.079 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:55.080 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:15:55.080 EE_Dev_1 : 0.90 36471.82 142.47 5.53 0.00 435.42 141.50 711.21 00:15:55.080 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:15:55.080 Dev_2 : 5.00 77300.42 301.95 0.00 0.00 203.98 63.77 314572.80 00:15:55.080 =================================================================================================================== 00:15:55.080 Total : 113772.24 444.42 5.53 0.00 222.17 63.77 314572.80 00:15:56.015 12:59:59 -- bdev/blockdev.sh@499 -- # killprocess 118310 00:15:56.015 12:59:59 -- common/autotest_common.sh@924 -- # '[' -z 118310 ']' 00:15:56.015 12:59:59 -- common/autotest_common.sh@928 -- # kill -0 118310 00:15:56.015 12:59:59 -- common/autotest_common.sh@929 -- # uname 00:15:56.015 12:59:59 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:15:56.015 12:59:59 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 118310 00:15:56.015 12:59:59 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:15:56.015 killing process with pid 118310 00:15:56.015 Received shutdown signal, test time was about 5.000000 seconds 00:15:56.015 00:15:56.015 Latency(us) 00:15:56.015 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:56.016 =================================================================================================================== 00:15:56.016 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:56.016 12:59:59 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:15:56.016 12:59:59 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 118310' 00:15:56.016 12:59:59 -- common/autotest_common.sh@943 -- # kill 118310 00:15:56.016 12:59:59 -- common/autotest_common.sh@948 -- # wait 118310 00:15:57.392 13:00:01 -- bdev/blockdev.sh@503 -- # ERR_PID=118424 00:15:57.392 13:00:01 -- bdev/blockdev.sh@504 -- # echo 'Process error testing pid: 118424' 00:15:57.392 13:00:01 -- bdev/blockdev.sh@502 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:15:57.392 Process error testing pid: 118424 00:15:57.392 13:00:01 -- bdev/blockdev.sh@505 -- # waitforlisten 118424 00:15:57.392 13:00:01 -- common/autotest_common.sh@817 -- # '[' -z 118424 ']' 00:15:57.392 13:00:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.392 13:00:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:57.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
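Both bdev_error suites in this run follow the same shape: bdevperf is started with -z (wait for an RPC trigger), the device stack is assembled over the UNIX socket, five failures are armed on the error vbdev, and perform_tests starts the timed run. A minimal sketch of that RPC sequence, assuming the default /var/tmp/spdk.sock socket and paths relative to the spdk repo (this is a condensation of what blockdev.sh drives above, not a verbatim excerpt):

    # 128 MiB malloc disk with 512 B blocks; the error vbdev will wrap it
    scripts/rpc.py bdev_malloc_create -b Dev_1 128 512
    # wrap Dev_1 in an error-injection vbdev, exposed as EE_Dev_1
    scripts/rpc.py bdev_error_create Dev_1
    # an identical, untouched disk serves as the control device
    scripts/rpc.py bdev_malloc_create -b Dev_2 128 512
    # arm 5 injected failures covering all I/O types on the wrapper
    scripts/rpc.py bdev_error_inject_error EE_Dev_1 all failure -n 5
    # tell the waiting bdevperf to run its -w randread -t 5 workload
    examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests

With -f (continue on error) set, as in the first suite, the five EE_Dev_1 failures land in the Fail/s column of the latency table instead of aborting the job; the second suite (pid 118424) omits -f, which is why its perform_tests call is expected to fail and is checked with NOT wait further down.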
00:15:57.392 13:00:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.392 13:00:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:57.392 13:00:01 -- common/autotest_common.sh@10 -- # set +x 00:15:57.392 [2024-04-17 13:00:01.417970] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:15:57.392 [2024-04-17 13:00:01.418232] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118424 ] 00:15:57.648 [2024-04-17 13:00:01.585221] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.648 [2024-04-17 13:00:01.791299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:58.583 13:00:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:58.583 13:00:02 -- common/autotest_common.sh@850 -- # return 0 00:15:58.583 13:00:02 -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:15:58.583 13:00:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:58.583 13:00:02 -- common/autotest_common.sh@10 -- # set +x 00:15:58.583 Dev_1 00:15:58.583 13:00:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:58.583 13:00:02 -- bdev/blockdev.sh@508 -- # waitforbdev Dev_1 00:15:58.583 13:00:02 -- common/autotest_common.sh@885 -- # local bdev_name=Dev_1 00:15:58.583 13:00:02 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:58.583 13:00:02 -- common/autotest_common.sh@887 -- # local i 00:15:58.583 13:00:02 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:58.583 13:00:02 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:58.583 13:00:02 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:15:58.583 13:00:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:58.583 13:00:02 -- common/autotest_common.sh@10 -- # set +x 00:15:58.583 13:00:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:58.583 13:00:02 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:15:58.583 13:00:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:58.583 13:00:02 -- common/autotest_common.sh@10 -- # set +x 00:15:58.583 [ 00:15:58.583 { 00:15:58.583 "name": "Dev_1", 00:15:58.583 "aliases": [ 00:15:58.583 "8c9b6d02-6112-4518-a6a2-7974b814c539" 00:15:58.583 ], 00:15:58.583 "product_name": "Malloc disk", 00:15:58.583 "block_size": 512, 00:15:58.583 "num_blocks": 262144, 00:15:58.583 "uuid": "8c9b6d02-6112-4518-a6a2-7974b814c539", 00:15:58.583 "assigned_rate_limits": { 00:15:58.583 "rw_ios_per_sec": 0, 00:15:58.583 "rw_mbytes_per_sec": 0, 00:15:58.583 "r_mbytes_per_sec": 0, 00:15:58.583 "w_mbytes_per_sec": 0 00:15:58.583 }, 00:15:58.583 "claimed": false, 00:15:58.583 "zoned": false, 00:15:58.583 "supported_io_types": { 00:15:58.583 "read": true, 00:15:58.583 "write": true, 00:15:58.583 "unmap": true, 00:15:58.583 "write_zeroes": true, 00:15:58.583 "flush": true, 00:15:58.583 "reset": true, 00:15:58.583 "compare": false, 00:15:58.583 "compare_and_write": false, 00:15:58.583 "abort": true, 00:15:58.583 "nvme_admin": false, 00:15:58.583 "nvme_io": false 00:15:58.583 }, 00:15:58.583 "memory_domains": [ 00:15:58.583 { 00:15:58.583 "dma_device_id": "system", 00:15:58.583 "dma_device_type": 1 00:15:58.583 }, 00:15:58.583 { 00:15:58.583 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:58.583 "dma_device_type": 2 00:15:58.583 } 00:15:58.583 ], 00:15:58.583 "driver_specific": {} 00:15:58.583 } 00:15:58.583 ] 00:15:58.583 13:00:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:58.583 13:00:02 -- common/autotest_common.sh@893 -- # return 0 00:15:58.583 13:00:02 -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_error_create Dev_1 00:15:58.583 13:00:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:58.583 13:00:02 -- common/autotest_common.sh@10 -- # set +x 00:15:58.583 true 00:15:58.583 13:00:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:58.583 13:00:02 -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:15:58.583 13:00:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:58.583 13:00:02 -- common/autotest_common.sh@10 -- # set +x 00:15:58.842 Dev_2 00:15:58.842 13:00:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:58.842 13:00:02 -- bdev/blockdev.sh@511 -- # waitforbdev Dev_2 00:15:58.842 13:00:02 -- common/autotest_common.sh@885 -- # local bdev_name=Dev_2 00:15:58.842 13:00:02 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:58.842 13:00:02 -- common/autotest_common.sh@887 -- # local i 00:15:58.842 13:00:02 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:58.842 13:00:02 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:58.842 13:00:02 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:15:58.842 13:00:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:58.842 13:00:02 -- common/autotest_common.sh@10 -- # set +x 00:15:58.842 13:00:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:58.842 13:00:02 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:15:58.842 13:00:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:58.842 13:00:02 -- common/autotest_common.sh@10 -- # set +x 00:15:58.842 [ 00:15:58.842 { 00:15:58.842 "name": "Dev_2", 00:15:58.842 "aliases": [ 00:15:58.842 "9047937d-c787-4165-bafc-ac15200d22cd" 00:15:58.842 ], 00:15:58.842 "product_name": "Malloc disk", 00:15:58.842 "block_size": 512, 00:15:58.842 "num_blocks": 262144, 00:15:58.842 "uuid": "9047937d-c787-4165-bafc-ac15200d22cd", 00:15:58.842 "assigned_rate_limits": { 00:15:58.842 "rw_ios_per_sec": 0, 00:15:58.842 "rw_mbytes_per_sec": 0, 00:15:58.842 "r_mbytes_per_sec": 0, 00:15:58.842 "w_mbytes_per_sec": 0 00:15:58.842 }, 00:15:58.842 "claimed": false, 00:15:58.842 "zoned": false, 00:15:58.842 "supported_io_types": { 00:15:58.842 "read": true, 00:15:58.842 "write": true, 00:15:58.842 "unmap": true, 00:15:58.842 "write_zeroes": true, 00:15:58.842 "flush": true, 00:15:58.842 "reset": true, 00:15:58.842 "compare": false, 00:15:58.842 "compare_and_write": false, 00:15:58.842 "abort": true, 00:15:58.842 "nvme_admin": false, 00:15:58.842 "nvme_io": false 00:15:58.842 }, 00:15:58.842 "memory_domains": [ 00:15:58.842 { 00:15:58.842 "dma_device_id": "system", 00:15:58.842 "dma_device_type": 1 00:15:58.842 }, 00:15:58.842 { 00:15:58.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.842 "dma_device_type": 2 00:15:58.842 } 00:15:58.842 ], 00:15:58.842 "driver_specific": {} 00:15:58.842 } 00:15:58.842 ] 00:15:58.843 13:00:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:58.843 13:00:02 -- common/autotest_common.sh@893 -- # return 0 00:15:58.843 13:00:02 -- bdev/blockdev.sh@512 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:15:58.843 13:00:02 -- common/autotest_common.sh@549 -- 
# xtrace_disable 00:15:58.843 13:00:02 -- common/autotest_common.sh@10 -- # set +x 00:15:58.843 13:00:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:58.843 13:00:02 -- bdev/blockdev.sh@515 -- # NOT wait 118424 00:15:58.843 13:00:02 -- common/autotest_common.sh@638 -- # local es=0 00:15:58.843 13:00:02 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 118424 00:15:58.843 13:00:02 -- bdev/blockdev.sh@514 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:15:58.843 13:00:02 -- common/autotest_common.sh@626 -- # local arg=wait 00:15:58.843 13:00:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:58.843 13:00:02 -- common/autotest_common.sh@630 -- # type -t wait 00:15:58.843 13:00:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:58.843 13:00:02 -- common/autotest_common.sh@641 -- # wait 118424 00:15:58.843 Running I/O for 5 seconds... 00:15:58.843 task offset: 215280 on job bdev=EE_Dev_1 fails 00:15:58.843 00:15:58.843 Latency(us) 00:15:58.843 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:58.843 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:15:58.843 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:15:58.843 EE_Dev_1 : 0.00 26796.59 104.67 6090.13 0.00 389.90 154.53 707.49 00:15:58.843 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:15:58.843 Dev_2 : 0.00 17543.86 68.53 0.00 0.00 650.59 137.77 1206.46 00:15:58.843 =================================================================================================================== 00:15:58.843 Total : 44340.45 173.20 6090.13 0.00 531.29 137.77 1206.46 00:15:58.843 [2024-04-17 13:00:02.879256] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:58.843 request: 00:15:58.843 { 00:15:58.843 "method": "perform_tests", 00:15:58.843 "req_id": 1 00:15:58.843 } 00:15:58.843 Got JSON-RPC error response 00:15:58.843 response: 00:15:58.843 { 00:15:58.843 "code": -32603, 00:15:58.843 "message": "bdevperf failed with error Operation not permitted" 00:15:58.843 } 00:16:00.743 ************************************ 00:16:00.743 END TEST bdev_error 00:16:00.743 ************************************ 00:16:00.743 13:00:04 -- common/autotest_common.sh@641 -- # es=255 00:16:00.743 13:00:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:00.743 13:00:04 -- common/autotest_common.sh@650 -- # es=127 00:16:00.743 13:00:04 -- common/autotest_common.sh@651 -- # case "$es" in 00:16:00.743 13:00:04 -- common/autotest_common.sh@658 -- # es=1 00:16:00.743 13:00:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:00.743 00:16:00.743 real 0m12.471s 00:16:00.743 user 0m12.851s 00:16:00.743 sys 0m0.798s 00:16:00.743 13:00:04 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:16:00.743 13:00:04 -- common/autotest_common.sh@10 -- # set +x 00:16:00.743 13:00:04 -- bdev/blockdev.sh@791 -- # run_test bdev_stat stat_test_suite '' 00:16:00.743 13:00:04 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:16:00.743 13:00:04 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:16:00.743 13:00:04 -- common/autotest_common.sh@10 -- # set +x 00:16:00.743 ************************************ 00:16:00.743 START TEST bdev_stat 00:16:00.743 ************************************ 00:16:00.743 13:00:04 -- common/autotest_common.sh@1099 -- # stat_test_suite '' 00:16:00.743 13:00:04 -- bdev/blockdev.sh@592 -- # STAT_DEV=Malloc_STAT 00:16:00.743 
13:00:04 -- bdev/blockdev.sh@596 -- # STAT_PID=118513 00:16:00.743 Process Bdev IO statistics testing pid: 118513 00:16:00.743 13:00:04 -- bdev/blockdev.sh@597 -- # echo 'Process Bdev IO statistics testing pid: 118513' 00:16:00.743 13:00:04 -- bdev/blockdev.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:16:00.743 13:00:04 -- bdev/blockdev.sh@598 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:16:00.743 13:00:04 -- bdev/blockdev.sh@599 -- # waitforlisten 118513 00:16:00.743 13:00:04 -- common/autotest_common.sh@817 -- # '[' -z 118513 ']' 00:16:00.743 13:00:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.743 13:00:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:00.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:00.743 13:00:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:00.743 13:00:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:00.743 13:00:04 -- common/autotest_common.sh@10 -- # set +x 00:16:00.743 [2024-04-17 13:00:04.792234] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:16:00.743 [2024-04-17 13:00:04.792493] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118513 ] 00:16:01.000 [2024-04-17 13:00:04.976912] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:01.257 [2024-04-17 13:00:05.187578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:01.257 [2024-04-17 13:00:05.187588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.823 13:00:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:01.823 13:00:05 -- common/autotest_common.sh@850 -- # return 0 00:16:01.823 13:00:05 -- bdev/blockdev.sh@601 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:16:01.823 13:00:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:01.823 13:00:05 -- common/autotest_common.sh@10 -- # set +x 00:16:01.823 Malloc_STAT 00:16:01.823 13:00:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:01.823 13:00:05 -- bdev/blockdev.sh@602 -- # waitforbdev Malloc_STAT 00:16:01.823 13:00:05 -- common/autotest_common.sh@885 -- # local bdev_name=Malloc_STAT 00:16:01.823 13:00:05 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:01.823 13:00:05 -- common/autotest_common.sh@887 -- # local i 00:16:01.823 13:00:05 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:01.823 13:00:05 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:01.823 13:00:05 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:16:01.823 13:00:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:01.823 13:00:05 -- common/autotest_common.sh@10 -- # set +x 00:16:01.823 13:00:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:01.823 13:00:05 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:16:01.823 13:00:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:01.823 13:00:05 -- common/autotest_common.sh@10 -- # set +x 00:16:01.823 [ 00:16:01.823 { 00:16:01.823 "name": "Malloc_STAT", 00:16:01.823 "aliases": [ 00:16:01.823 
"03b96dac-e436-4158-8c5c-cde120d7a2c0" 00:16:01.823 ], 00:16:01.823 "product_name": "Malloc disk", 00:16:01.823 "block_size": 512, 00:16:01.823 "num_blocks": 262144, 00:16:01.823 "uuid": "03b96dac-e436-4158-8c5c-cde120d7a2c0", 00:16:01.823 "assigned_rate_limits": { 00:16:01.823 "rw_ios_per_sec": 0, 00:16:01.823 "rw_mbytes_per_sec": 0, 00:16:01.823 "r_mbytes_per_sec": 0, 00:16:01.823 "w_mbytes_per_sec": 0 00:16:01.823 }, 00:16:01.823 "claimed": false, 00:16:01.823 "zoned": false, 00:16:01.823 "supported_io_types": { 00:16:01.823 "read": true, 00:16:01.823 "write": true, 00:16:01.823 "unmap": true, 00:16:01.823 "write_zeroes": true, 00:16:01.823 "flush": true, 00:16:01.823 "reset": true, 00:16:01.823 "compare": false, 00:16:01.823 "compare_and_write": false, 00:16:01.823 "abort": true, 00:16:01.823 "nvme_admin": false, 00:16:01.823 "nvme_io": false 00:16:01.823 }, 00:16:01.823 "memory_domains": [ 00:16:01.823 { 00:16:01.823 "dma_device_id": "system", 00:16:01.823 "dma_device_type": 1 00:16:01.823 }, 00:16:01.823 { 00:16:01.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.823 "dma_device_type": 2 00:16:01.823 } 00:16:01.823 ], 00:16:01.823 "driver_specific": {} 00:16:01.823 } 00:16:01.823 ] 00:16:01.823 13:00:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:01.823 13:00:05 -- common/autotest_common.sh@893 -- # return 0 00:16:01.823 13:00:05 -- bdev/blockdev.sh@605 -- # sleep 2 00:16:01.823 13:00:05 -- bdev/blockdev.sh@604 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:02.081 Running I/O for 10 seconds... 00:16:03.993 13:00:07 -- bdev/blockdev.sh@606 -- # stat_function_test Malloc_STAT 00:16:03.993 13:00:07 -- bdev/blockdev.sh@559 -- # local bdev_name=Malloc_STAT 00:16:03.993 13:00:07 -- bdev/blockdev.sh@560 -- # local iostats 00:16:03.993 13:00:07 -- bdev/blockdev.sh@561 -- # local io_count1 00:16:03.993 13:00:07 -- bdev/blockdev.sh@562 -- # local io_count2 00:16:03.993 13:00:07 -- bdev/blockdev.sh@563 -- # local iostats_per_channel 00:16:03.993 13:00:07 -- bdev/blockdev.sh@564 -- # local io_count_per_channel1 00:16:03.993 13:00:07 -- bdev/blockdev.sh@565 -- # local io_count_per_channel2 00:16:03.993 13:00:07 -- bdev/blockdev.sh@566 -- # local io_count_per_channel_all=0 00:16:03.993 13:00:07 -- bdev/blockdev.sh@568 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:16:03.993 13:00:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:03.993 13:00:07 -- common/autotest_common.sh@10 -- # set +x 00:16:03.993 13:00:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:03.993 13:00:07 -- bdev/blockdev.sh@568 -- # iostats='{ 00:16:03.993 "tick_rate": 2200000000, 00:16:03.993 "ticks": 1743337841195, 00:16:03.993 "bdevs": [ 00:16:03.993 { 00:16:03.993 "name": "Malloc_STAT", 00:16:03.993 "bytes_read": 867209728, 00:16:03.993 "num_read_ops": 211715, 00:16:03.993 "bytes_written": 0, 00:16:03.993 "num_write_ops": 0, 00:16:03.993 "bytes_unmapped": 0, 00:16:03.993 "num_unmap_ops": 0, 00:16:03.993 "bytes_copied": 0, 00:16:03.993 "num_copy_ops": 0, 00:16:03.993 "read_latency_ticks": 2157540718948, 00:16:03.993 "max_read_latency_ticks": 15357169, 00:16:03.993 "min_read_latency_ticks": 378511, 00:16:03.993 "write_latency_ticks": 0, 00:16:03.993 "max_write_latency_ticks": 0, 00:16:03.993 "min_write_latency_ticks": 0, 00:16:03.993 "unmap_latency_ticks": 0, 00:16:03.993 "max_unmap_latency_ticks": 0, 00:16:03.993 "min_unmap_latency_ticks": 0, 00:16:03.993 "copy_latency_ticks": 0, 00:16:03.993 "max_copy_latency_ticks": 0, 00:16:03.993 
"min_copy_latency_ticks": 0, 00:16:03.993 "io_error": {} 00:16:03.993 } 00:16:03.993 ] 00:16:03.993 }' 00:16:03.993 13:00:07 -- bdev/blockdev.sh@569 -- # jq -r '.bdevs[0].num_read_ops' 00:16:03.993 13:00:08 -- bdev/blockdev.sh@569 -- # io_count1=211715 00:16:03.993 13:00:08 -- bdev/blockdev.sh@571 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:16:03.993 13:00:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:03.993 13:00:08 -- common/autotest_common.sh@10 -- # set +x 00:16:03.993 13:00:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:03.993 13:00:08 -- bdev/blockdev.sh@571 -- # iostats_per_channel='{ 00:16:03.993 "tick_rate": 2200000000, 00:16:03.993 "ticks": 1743504592392, 00:16:03.993 "name": "Malloc_STAT", 00:16:03.993 "channels": [ 00:16:03.993 { 00:16:03.993 "thread_id": 2, 00:16:03.993 "bytes_read": 447741952, 00:16:03.993 "num_read_ops": 109312, 00:16:03.993 "bytes_written": 0, 00:16:03.993 "num_write_ops": 0, 00:16:03.993 "bytes_unmapped": 0, 00:16:03.993 "num_unmap_ops": 0, 00:16:03.993 "bytes_copied": 0, 00:16:03.993 "num_copy_ops": 0, 00:16:03.993 "read_latency_ticks": 1119642736985, 00:16:03.993 "max_read_latency_ticks": 15357169, 00:16:03.993 "min_read_latency_ticks": 7998157, 00:16:03.993 "write_latency_ticks": 0, 00:16:03.993 "max_write_latency_ticks": 0, 00:16:03.993 "min_write_latency_ticks": 0, 00:16:03.993 "unmap_latency_ticks": 0, 00:16:03.993 "max_unmap_latency_ticks": 0, 00:16:03.993 "min_unmap_latency_ticks": 0, 00:16:03.993 "copy_latency_ticks": 0, 00:16:03.993 "max_copy_latency_ticks": 0, 00:16:03.993 "min_copy_latency_ticks": 0 00:16:03.993 }, 00:16:03.993 { 00:16:03.993 "thread_id": 3, 00:16:03.993 "bytes_read": 451936256, 00:16:03.993 "num_read_ops": 110336, 00:16:03.993 "bytes_written": 0, 00:16:03.993 "num_write_ops": 0, 00:16:03.993 "bytes_unmapped": 0, 00:16:03.993 "num_unmap_ops": 0, 00:16:03.993 "bytes_copied": 0, 00:16:03.993 "num_copy_ops": 0, 00:16:03.993 "read_latency_ticks": 1121987118104, 00:16:03.993 "max_read_latency_ticks": 10825160, 00:16:03.993 "min_read_latency_ticks": 7782494, 00:16:03.993 "write_latency_ticks": 0, 00:16:03.993 "max_write_latency_ticks": 0, 00:16:03.993 "min_write_latency_ticks": 0, 00:16:03.993 "unmap_latency_ticks": 0, 00:16:03.993 "max_unmap_latency_ticks": 0, 00:16:03.993 "min_unmap_latency_ticks": 0, 00:16:03.993 "copy_latency_ticks": 0, 00:16:03.993 "max_copy_latency_ticks": 0, 00:16:03.993 "min_copy_latency_ticks": 0 00:16:03.993 } 00:16:03.993 ] 00:16:03.993 }' 00:16:03.993 13:00:08 -- bdev/blockdev.sh@572 -- # jq -r '.channels[0].num_read_ops' 00:16:03.993 13:00:08 -- bdev/blockdev.sh@572 -- # io_count_per_channel1=109312 00:16:03.993 13:00:08 -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=109312 00:16:03.993 13:00:08 -- bdev/blockdev.sh@574 -- # jq -r '.channels[1].num_read_ops' 00:16:04.252 13:00:08 -- bdev/blockdev.sh@574 -- # io_count_per_channel2=110336 00:16:04.252 13:00:08 -- bdev/blockdev.sh@575 -- # io_count_per_channel_all=219648 00:16:04.252 13:00:08 -- bdev/blockdev.sh@577 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:16:04.252 13:00:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.252 13:00:08 -- common/autotest_common.sh@10 -- # set +x 00:16:04.252 13:00:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.252 13:00:08 -- bdev/blockdev.sh@577 -- # iostats='{ 00:16:04.252 "tick_rate": 2200000000, 00:16:04.252 "ticks": 1743784938945, 00:16:04.252 "bdevs": [ 00:16:04.252 { 00:16:04.252 "name": "Malloc_STAT", 00:16:04.252 "bytes_read": 955290112, 
00:16:04.252 "num_read_ops": 233219, 00:16:04.252 "bytes_written": 0, 00:16:04.252 "num_write_ops": 0, 00:16:04.252 "bytes_unmapped": 0, 00:16:04.252 "num_unmap_ops": 0, 00:16:04.252 "bytes_copied": 0, 00:16:04.252 "num_copy_ops": 0, 00:16:04.252 "read_latency_ticks": 2383512305757, 00:16:04.252 "max_read_latency_ticks": 15357169, 00:16:04.252 "min_read_latency_ticks": 378511, 00:16:04.252 "write_latency_ticks": 0, 00:16:04.252 "max_write_latency_ticks": 0, 00:16:04.252 "min_write_latency_ticks": 0, 00:16:04.252 "unmap_latency_ticks": 0, 00:16:04.252 "max_unmap_latency_ticks": 0, 00:16:04.252 "min_unmap_latency_ticks": 0, 00:16:04.252 "copy_latency_ticks": 0, 00:16:04.252 "max_copy_latency_ticks": 0, 00:16:04.252 "min_copy_latency_ticks": 0, 00:16:04.252 "io_error": {} 00:16:04.252 } 00:16:04.252 ] 00:16:04.252 }' 00:16:04.252 13:00:08 -- bdev/blockdev.sh@578 -- # jq -r '.bdevs[0].num_read_ops' 00:16:04.252 13:00:08 -- bdev/blockdev.sh@578 -- # io_count2=233219 00:16:04.252 13:00:08 -- bdev/blockdev.sh@583 -- # '[' 219648 -lt 211715 ']' 00:16:04.252 13:00:08 -- bdev/blockdev.sh@583 -- # '[' 219648 -gt 233219 ']' 00:16:04.252 13:00:08 -- bdev/blockdev.sh@608 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:16:04.252 13:00:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.252 13:00:08 -- common/autotest_common.sh@10 -- # set +x 00:16:04.252 00:16:04.252 Latency(us) 00:16:04.252 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:04.252 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:16:04.252 Malloc_STAT : 2.19 54620.70 213.36 0.00 0.00 4675.98 1176.67 7000.44 00:16:04.252 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:16:04.252 Malloc_STAT : 2.20 55377.64 216.32 0.00 0.00 4612.34 889.95 4944.99 00:16:04.252 =================================================================================================================== 00:16:04.252 Total : 109998.35 429.68 0.00 0.00 4643.92 889.95 7000.44 00:16:04.252 0 00:16:04.252 13:00:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.252 13:00:08 -- bdev/blockdev.sh@609 -- # killprocess 118513 00:16:04.252 13:00:08 -- common/autotest_common.sh@924 -- # '[' -z 118513 ']' 00:16:04.252 13:00:08 -- common/autotest_common.sh@928 -- # kill -0 118513 00:16:04.252 13:00:08 -- common/autotest_common.sh@929 -- # uname 00:16:04.252 13:00:08 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:16:04.252 13:00:08 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 118513 00:16:04.252 13:00:08 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:16:04.252 killing process with pid 118513 00:16:04.252 Received shutdown signal, test time was about 2.330289 seconds 00:16:04.252 00:16:04.252 Latency(us) 00:16:04.252 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:04.252 =================================================================================================================== 00:16:04.252 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:04.252 13:00:08 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:16:04.252 13:00:08 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 118513' 00:16:04.252 13:00:08 -- common/autotest_common.sh@943 -- # kill 118513 00:16:04.252 13:00:08 -- common/autotest_common.sh@948 -- # wait 118513 00:16:05.624 ************************************ 00:16:05.624 END TEST bdev_stat 00:16:05.624 ************************************ 
00:16:05.624 13:00:09 -- bdev/blockdev.sh@610 -- # trap - SIGINT SIGTERM EXIT 00:16:05.624 00:16:05.624 real 0m4.969s 00:16:05.624 user 0m9.434s 00:16:05.624 sys 0m0.397s 00:16:05.624 13:00:09 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:16:05.624 13:00:09 -- common/autotest_common.sh@10 -- # set +x 00:16:05.624 13:00:09 -- bdev/blockdev.sh@794 -- # [[ bdev == gpt ]] 00:16:05.624 13:00:09 -- bdev/blockdev.sh@798 -- # [[ bdev == crypto_sw ]] 00:16:05.624 13:00:09 -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:16:05.624 13:00:09 -- bdev/blockdev.sh@811 -- # cleanup 00:16:05.624 13:00:09 -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:05.624 13:00:09 -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:05.624 ************************************ 00:16:05.624 END TEST blockdev_general 00:16:05.624 ************************************ 00:16:05.624 13:00:09 -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:16:05.624 13:00:09 -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:16:05.624 13:00:09 -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:16:05.624 13:00:09 -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:16:05.624 00:16:05.624 real 2m27.963s 00:16:05.624 user 5m59.994s 00:16:05.624 sys 0m20.851s 00:16:05.624 13:00:09 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:16:05.624 13:00:09 -- common/autotest_common.sh@10 -- # set +x 00:16:05.624 13:00:09 -- spdk/autotest.sh@185 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:16:05.624 13:00:09 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:16:05.624 13:00:09 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:16:05.624 13:00:09 -- common/autotest_common.sh@10 -- # set +x 00:16:05.883 ************************************ 00:16:05.883 START TEST bdev_raid 00:16:05.883 ************************************ 00:16:05.883 13:00:09 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:16:05.883 * Looking for test storage... 
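Every suite in this log is driven through the run_test wrapper, which prints the starred START/END banners and the real/user/sys timing seen after each test. Roughly, and with the bookkeeping it also does for the final report omitted (a simplified sketch; the real helper lives in autotest_common.sh):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"      # e.g. run_test bdev_raid .../test/bdev/bdev_raid.sh
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }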
00:16:05.883 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:16:05.883 13:00:09 -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:16:05.883 13:00:09 -- bdev/nbd_common.sh@6 -- # set -e 00:16:05.883 13:00:09 -- bdev/bdev_raid.sh@14 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:16:05.883 13:00:09 -- bdev/bdev_raid.sh@714 -- # trap 'on_error_exit;' ERR 00:16:05.883 13:00:09 -- bdev/bdev_raid.sh@716 -- # uname -s 00:16:05.883 13:00:09 -- bdev/bdev_raid.sh@716 -- # '[' Linux = Linux ']' 00:16:05.883 13:00:09 -- bdev/bdev_raid.sh@716 -- # modprobe -n nbd 00:16:05.883 13:00:09 -- bdev/bdev_raid.sh@717 -- # has_nbd=true 00:16:05.883 13:00:09 -- bdev/bdev_raid.sh@718 -- # modprobe nbd 00:16:05.883 13:00:09 -- bdev/bdev_raid.sh@719 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:16:05.883 13:00:09 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:16:05.883 13:00:09 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:16:05.883 13:00:09 -- common/autotest_common.sh@10 -- # set +x 00:16:05.883 ************************************ 00:16:05.883 START TEST raid_function_test_raid0 00:16:05.883 ************************************ 00:16:05.883 13:00:09 -- common/autotest_common.sh@1099 -- # raid_function_test raid0 00:16:05.883 13:00:09 -- bdev/bdev_raid.sh@81 -- # local raid_level=raid0 00:16:05.883 13:00:09 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:16:05.883 13:00:09 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:16:05.883 13:00:09 -- bdev/bdev_raid.sh@86 -- # raid_pid=118680 00:16:05.883 13:00:09 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:05.883 13:00:09 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 118680' 00:16:05.883 Process raid pid: 118680 00:16:05.883 13:00:09 -- bdev/bdev_raid.sh@88 -- # waitforlisten 118680 /var/tmp/spdk-raid.sock 00:16:05.883 13:00:09 -- common/autotest_common.sh@817 -- # '[' -z 118680 ']' 00:16:05.883 13:00:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:05.883 13:00:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:05.883 13:00:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:05.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:05.883 13:00:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:05.883 13:00:09 -- common/autotest_common.sh@10 -- # set +x 00:16:05.884 [2024-04-17 13:00:09.992403] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:16:05.884 [2024-04-17 13:00:09.992764] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:06.143 [2024-04-17 13:00:10.151882] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.401 [2024-04-17 13:00:10.365166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.661 [2024-04-17 13:00:10.566607] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:06.921 13:00:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:06.921 13:00:10 -- common/autotest_common.sh@850 -- # return 0 00:16:06.921 13:00:10 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev raid0 00:16:06.921 13:00:10 -- bdev/bdev_raid.sh@67 -- # local raid_level=raid0 00:16:06.921 13:00:10 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:16:06.921 13:00:10 -- bdev/bdev_raid.sh@70 -- # cat 00:16:06.921 13:00:10 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:16:07.180 [2024-04-17 13:00:11.277911] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:16:07.180 [2024-04-17 13:00:11.280312] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:16:07.180 [2024-04-17 13:00:11.280494] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:16:07.180 [2024-04-17 13:00:11.280599] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:07.180 [2024-04-17 13:00:11.280781] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:16:07.180 [2024-04-17 13:00:11.281241] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:16:07.180 [2024-04-17 13:00:11.281360] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000007280 00:16:07.180 [2024-04-17 13:00:11.281605] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:07.180 Base_1 00:16:07.180 Base_2 00:16:07.180 13:00:11 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:16:07.180 13:00:11 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:16:07.180 13:00:11 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:16:07.438 13:00:11 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:16:07.438 13:00:11 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:16:07.438 13:00:11 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:16:07.438 13:00:11 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:07.438 13:00:11 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:16:07.438 13:00:11 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:07.438 13:00:11 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:16:07.438 13:00:11 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:07.438 13:00:11 -- bdev/nbd_common.sh@12 -- # local i 00:16:07.438 13:00:11 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:07.438 13:00:11 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:07.438 13:00:11 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:16:07.697 [2024-04-17 13:00:11.742130] 
bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:16:07.697 /dev/nbd0 00:16:07.697 13:00:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:07.697 13:00:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:07.697 13:00:11 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:16:07.697 13:00:11 -- common/autotest_common.sh@855 -- # local i 00:16:07.697 13:00:11 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:16:07.697 13:00:11 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:16:07.697 13:00:11 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:16:07.697 13:00:11 -- common/autotest_common.sh@859 -- # break 00:16:07.697 13:00:11 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:16:07.697 13:00:11 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:16:07.697 13:00:11 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:07.697 1+0 records in 00:16:07.697 1+0 records out 00:16:07.697 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000308582 s, 13.3 MB/s 00:16:07.697 13:00:11 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:07.697 13:00:11 -- common/autotest_common.sh@872 -- # size=4096 00:16:07.697 13:00:11 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:07.697 13:00:11 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:16:07.697 13:00:11 -- common/autotest_common.sh@875 -- # return 0 00:16:07.697 13:00:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:07.697 13:00:11 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:07.697 13:00:11 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:16:07.697 13:00:11 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:07.697 13:00:11 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:16:07.956 13:00:12 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:07.956 { 00:16:07.956 "nbd_device": "/dev/nbd0", 00:16:07.956 "bdev_name": "raid" 00:16:07.956 } 00:16:07.956 ]' 00:16:07.956 13:00:12 -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:07.956 { 00:16:07.956 "nbd_device": "/dev/nbd0", 00:16:07.956 "bdev_name": "raid" 00:16:07.956 } 00:16:07.956 ]' 00:16:07.956 13:00:12 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:07.956 13:00:12 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:16:07.956 13:00:12 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:16:07.956 13:00:12 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:08.214 13:00:12 -- bdev/nbd_common.sh@65 -- # count=1 00:16:08.214 13:00:12 -- bdev/nbd_common.sh@66 -- # echo 1 00:16:08.214 13:00:12 -- bdev/bdev_raid.sh@98 -- # count=1 00:16:08.214 13:00:12 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:16:08.214 13:00:12 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:16:08.214 13:00:12 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:16:08.214 13:00:12 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:16:08.214 13:00:12 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:08.214 13:00:12 -- bdev/bdev_raid.sh@20 -- # local blksize 00:16:08.214 13:00:12 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:16:08.214 13:00:12 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:16:08.215 13:00:12 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 
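The block that follows is raid_unmap_data_verify: a 2 MiB random pattern is written both to a reference file and, through /dev/nbd0, to the raid bdev; then for each (offset, count) pair the reference is zeroed with dd while the same byte range is discarded on the device, and cmp requires the two to still match. Condensed, with the offsets and counts used in the dd/blkdiscard lines below:

    dd if=/dev/urandom of=/raidrandtest bs=512 count=4096            # reference pattern (2 MiB)
    dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct  # same pattern onto the raid bdev
    unmap_blk_offs=(0 1028 321); unmap_blk_nums=(128 2035 456)
    for i in 0 1 2; do
        off=${unmap_blk_offs[i]} num=${unmap_blk_nums[i]}
        dd if=/dev/zero of=/raidrandtest bs=512 seek=$off count=$num conv=notrunc
        blkdiscard -o $((off * 512)) -l $((num * 512)) /dev/nbd0     # discard the same range on the device
        blockdev --flushbufs /dev/nbd0                               # drop cached pages before comparing
        cmp -b -n 2097152 /raidrandtest /dev/nbd0                    # device must match the zeroed file
    done

The same loop runs again for the concat level further down; in both cases a discarded range on the malloc-backed raid must read back as zeros.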
00:16:08.215 13:00:12 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:16:08.215 13:00:12 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:16:08.215 13:00:12 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:16:08.215 13:00:12 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=(0 1028 321) 00:16:08.215 13:00:12 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:16:08.215 13:00:12 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=(128 2035 456) 00:16:08.215 13:00:12 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:16:08.215 13:00:12 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:16:08.215 13:00:12 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:16:08.215 13:00:12 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:16:08.215 4096+0 records in 00:16:08.215 4096+0 records out 00:16:08.215 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0195576 s, 107 MB/s 00:16:08.215 13:00:12 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:16:08.473 4096+0 records in 00:16:08.473 4096+0 records out 00:16:08.473 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.287294 s, 7.3 MB/s 00:16:08.473 13:00:12 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:16:08.473 13:00:12 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:16:08.473 13:00:12 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:16:08.473 13:00:12 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:16:08.473 13:00:12 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:16:08.473 13:00:12 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:16:08.473 13:00:12 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:16:08.473 128+0 records in 00:16:08.473 128+0 records out 00:16:08.473 65536 bytes (66 kB, 64 KiB) copied, 0.000457475 s, 143 MB/s 00:16:08.473 13:00:12 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:16:08.473 13:00:12 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:16:08.473 13:00:12 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:16:08.473 13:00:12 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:16:08.473 13:00:12 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:16:08.473 13:00:12 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:16:08.473 13:00:12 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:16:08.473 13:00:12 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:16:08.473 2035+0 records in 00:16:08.473 2035+0 records out 00:16:08.473 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00556952 s, 187 MB/s 00:16:08.473 13:00:12 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:16:08.473 13:00:12 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:16:08.473 13:00:12 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:16:08.473 13:00:12 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:16:08.473 13:00:12 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:16:08.473 13:00:12 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:16:08.473 13:00:12 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:16:08.473 13:00:12 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:16:08.473 456+0 records in 00:16:08.473 456+0 records out 00:16:08.473 233472 bytes (233 kB, 228 KiB) copied, 0.00132023 s, 177 MB/s 00:16:08.473 13:00:12 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:16:08.473 13:00:12 -- bdev/bdev_raid.sh@46 -- # blockdev 
--flushbufs /dev/nbd0 00:16:08.473 13:00:12 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:16:08.473 13:00:12 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:16:08.473 13:00:12 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:16:08.473 13:00:12 -- bdev/bdev_raid.sh@53 -- # return 0 00:16:08.473 13:00:12 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:16:08.473 13:00:12 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:08.473 13:00:12 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:16:08.473 13:00:12 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:08.473 13:00:12 -- bdev/nbd_common.sh@51 -- # local i 00:16:08.473 13:00:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:08.473 13:00:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:16:08.731 13:00:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:08.731 [2024-04-17 13:00:12.754872] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:08.731 13:00:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:08.731 13:00:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:08.731 13:00:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:08.731 13:00:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:08.731 13:00:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:08.731 13:00:12 -- bdev/nbd_common.sh@41 -- # break 00:16:08.731 13:00:12 -- bdev/nbd_common.sh@45 -- # return 0 00:16:08.731 13:00:12 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:16:08.731 13:00:12 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:08.731 13:00:12 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:16:08.990 13:00:13 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:08.990 13:00:13 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:08.990 13:00:13 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:09.249 13:00:13 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:09.249 13:00:13 -- bdev/nbd_common.sh@65 -- # echo '' 00:16:09.249 13:00:13 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:09.249 13:00:13 -- bdev/nbd_common.sh@65 -- # true 00:16:09.249 13:00:13 -- bdev/nbd_common.sh@65 -- # count=0 00:16:09.249 13:00:13 -- bdev/nbd_common.sh@66 -- # echo 0 00:16:09.249 13:00:13 -- bdev/bdev_raid.sh@106 -- # count=0 00:16:09.249 13:00:13 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:16:09.249 13:00:13 -- bdev/bdev_raid.sh@111 -- # killprocess 118680 00:16:09.249 13:00:13 -- common/autotest_common.sh@924 -- # '[' -z 118680 ']' 00:16:09.249 13:00:13 -- common/autotest_common.sh@928 -- # kill -0 118680 00:16:09.249 13:00:13 -- common/autotest_common.sh@929 -- # uname 00:16:09.249 13:00:13 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:16:09.249 13:00:13 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 118680 00:16:09.249 killing process with pid 118680 00:16:09.249 13:00:13 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:16:09.249 13:00:13 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:16:09.249 13:00:13 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 118680' 00:16:09.249 13:00:13 -- common/autotest_common.sh@943 -- # kill 118680 00:16:09.249 13:00:13 -- common/autotest_common.sh@948 -- # wait 118680 00:16:09.249 [2024-04-17 
13:00:13.158868] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:09.249 [2024-04-17 13:00:13.158998] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:09.249 [2024-04-17 13:00:13.159055] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:09.249 [2024-04-17 13:00:13.159066] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid, state offline 00:16:09.249 [2024-04-17 13:00:13.328538] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:10.625 ************************************ 00:16:10.625 END TEST raid_function_test_raid0 00:16:10.625 ************************************ 00:16:10.625 13:00:14 -- bdev/bdev_raid.sh@113 -- # return 0 00:16:10.625 00:16:10.625 real 0m4.568s 00:16:10.625 user 0m5.863s 00:16:10.625 sys 0m0.896s 00:16:10.625 13:00:14 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:16:10.625 13:00:14 -- common/autotest_common.sh@10 -- # set +x 00:16:10.625 13:00:14 -- bdev/bdev_raid.sh@720 -- # run_test raid_function_test_concat raid_function_test concat 00:16:10.625 13:00:14 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:16:10.625 13:00:14 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:16:10.625 13:00:14 -- common/autotest_common.sh@10 -- # set +x 00:16:10.625 ************************************ 00:16:10.625 START TEST raid_function_test_concat 00:16:10.625 ************************************ 00:16:10.625 13:00:14 -- common/autotest_common.sh@1099 -- # raid_function_test concat 00:16:10.625 13:00:14 -- bdev/bdev_raid.sh@81 -- # local raid_level=concat 00:16:10.625 13:00:14 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:16:10.625 13:00:14 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:16:10.625 13:00:14 -- bdev/bdev_raid.sh@86 -- # raid_pid=118859 00:16:10.625 Process raid pid: 118859 00:16:10.625 13:00:14 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 118859' 00:16:10.625 13:00:14 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:10.625 13:00:14 -- bdev/bdev_raid.sh@88 -- # waitforlisten 118859 /var/tmp/spdk-raid.sock 00:16:10.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:10.625 13:00:14 -- common/autotest_common.sh@817 -- # '[' -z 118859 ']' 00:16:10.625 13:00:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:10.625 13:00:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:10.625 13:00:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:10.625 13:00:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:10.625 13:00:14 -- common/autotest_common.sh@10 -- # set +x 00:16:10.625 [2024-04-17 13:00:14.631505] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:16:10.625 [2024-04-17 13:00:14.631676] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:10.884 [2024-04-17 13:00:14.786178] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.884 [2024-04-17 13:00:14.999014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.143 [2024-04-17 13:00:15.198817] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:11.710 13:00:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:11.710 13:00:15 -- common/autotest_common.sh@850 -- # return 0 00:16:11.710 13:00:15 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev concat 00:16:11.710 13:00:15 -- bdev/bdev_raid.sh@67 -- # local raid_level=concat 00:16:11.710 13:00:15 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:16:11.710 13:00:15 -- bdev/bdev_raid.sh@70 -- # cat 00:16:11.710 13:00:15 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:16:11.969 [2024-04-17 13:00:15.920673] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:16:11.969 [2024-04-17 13:00:15.922783] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:16:11.969 [2024-04-17 13:00:15.922871] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:16:11.969 [2024-04-17 13:00:15.922886] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:11.969 [2024-04-17 13:00:15.923071] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:16:11.969 [2024-04-17 13:00:15.923452] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:16:11.969 [2024-04-17 13:00:15.923478] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000007280 00:16:11.969 [2024-04-17 13:00:15.923663] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.969 Base_1 00:16:11.969 Base_2 00:16:11.969 13:00:15 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:16:11.969 13:00:15 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:16:11.969 13:00:15 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:16:12.228 13:00:16 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:16:12.228 13:00:16 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:16:12.228 13:00:16 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:16:12.228 13:00:16 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:12.228 13:00:16 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:16:12.228 13:00:16 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:12.228 13:00:16 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:16:12.228 13:00:16 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:12.228 13:00:16 -- bdev/nbd_common.sh@12 -- # local i 00:16:12.228 13:00:16 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:12.228 13:00:16 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:12.228 13:00:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:16:12.486 [2024-04-17 13:00:16.504827] 
bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:16:12.486 /dev/nbd0 00:16:12.486 13:00:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:12.486 13:00:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:12.486 13:00:16 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:16:12.486 13:00:16 -- common/autotest_common.sh@855 -- # local i 00:16:12.486 13:00:16 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:16:12.486 13:00:16 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:16:12.486 13:00:16 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:16:12.486 13:00:16 -- common/autotest_common.sh@859 -- # break 00:16:12.486 13:00:16 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:16:12.486 13:00:16 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:16:12.486 13:00:16 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:12.486 1+0 records in 00:16:12.486 1+0 records out 00:16:12.486 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266349 s, 15.4 MB/s 00:16:12.486 13:00:16 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:12.486 13:00:16 -- common/autotest_common.sh@872 -- # size=4096 00:16:12.486 13:00:16 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:12.486 13:00:16 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:16:12.486 13:00:16 -- common/autotest_common.sh@875 -- # return 0 00:16:12.486 13:00:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:12.486 13:00:16 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:12.486 13:00:16 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:16:12.486 13:00:16 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:12.486 13:00:16 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:16:12.750 13:00:16 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:12.750 { 00:16:12.750 "nbd_device": "/dev/nbd0", 00:16:12.750 "bdev_name": "raid" 00:16:12.750 } 00:16:12.750 ]' 00:16:12.750 13:00:16 -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:12.750 { 00:16:12.750 "nbd_device": "/dev/nbd0", 00:16:12.750 "bdev_name": "raid" 00:16:12.750 } 00:16:12.750 ]' 00:16:12.750 13:00:16 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:12.750 13:00:16 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:16:12.750 13:00:16 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:12.750 13:00:16 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:16:12.750 13:00:16 -- bdev/nbd_common.sh@65 -- # count=1 00:16:12.750 13:00:16 -- bdev/nbd_common.sh@66 -- # echo 1 00:16:12.750 13:00:16 -- bdev/bdev_raid.sh@98 -- # count=1 00:16:12.750 13:00:16 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:16:12.750 13:00:16 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:16:12.750 13:00:16 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:16:12.750 13:00:16 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:16:12.750 13:00:16 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:12.750 13:00:16 -- bdev/bdev_raid.sh@20 -- # local blksize 00:16:12.750 13:00:16 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:16:12.750 13:00:16 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:16:12.750 13:00:16 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 
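The concat suite repeats the raid0 flow: configure_raid_bdev cats a prebuilt rpcs.txt into rpc.py to create two malloc bases and the raid bdev, which nbd_start_disk then exposes as a kernel block device. The equivalent direct calls would look roughly like this; the base sizes are inferred from the claimed geometry (blockcnt 131072 x 512 B = 64 MiB over two bases), not quoted from rpcs.txt:

    rpc="scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_malloc_create -b Base_1 32 512          # 32 MiB each, 512 B blocks (inferred)
    $rpc bdev_malloc_create -b Base_2 32 512
    $rpc bdev_raid_create -n raid -z 64 -r concat -b 'Base_1 Base_2'   # -z: strip size in KiB
    $rpc nbd_start_disk raid /dev/nbd0                # back /dev/nbd0 with the raid bdev

Functionally the only difference from the raid0 pass is -r: concat appends the bases end to end instead of striping across them, and the same unmap/cmp verification must hold either way.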
00:16:12.750 13:00:16 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:16:12.750 13:00:16 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:16:12.750 13:00:16 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:16:12.750 13:00:16 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=(0 1028 321) 00:16:12.750 13:00:16 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:16:12.750 13:00:16 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=(128 2035 456) 00:16:12.750 13:00:16 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:16:12.750 13:00:16 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:16:12.750 13:00:16 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:16:12.750 13:00:16 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:16:13.008 4096+0 records in 00:16:13.008 4096+0 records out 00:16:13.008 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0283223 s, 74.0 MB/s 00:16:13.008 13:00:16 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:16:13.008 4096+0 records in 00:16:13.008 4096+0 records out 00:16:13.008 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.248413 s, 8.4 MB/s 00:16:13.008 13:00:17 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:16:13.008 13:00:17 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:16:13.298 13:00:17 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:16:13.298 13:00:17 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:16:13.298 13:00:17 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:16:13.298 13:00:17 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:16:13.298 13:00:17 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:16:13.298 128+0 records in 00:16:13.298 128+0 records out 00:16:13.298 65536 bytes (66 kB, 64 KiB) copied, 0.000688555 s, 95.2 MB/s 00:16:13.298 13:00:17 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:16:13.298 13:00:17 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:16:13.298 13:00:17 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:16:13.298 13:00:17 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:16:13.298 13:00:17 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:16:13.298 13:00:17 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:16:13.298 13:00:17 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:16:13.298 13:00:17 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:16:13.298 2035+0 records in 00:16:13.298 2035+0 records out 00:16:13.298 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00579859 s, 180 MB/s 00:16:13.298 13:00:17 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:16:13.298 13:00:17 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:16:13.298 13:00:17 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:16:13.298 13:00:17 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:16:13.298 13:00:17 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:16:13.298 13:00:17 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:16:13.298 13:00:17 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:16:13.298 13:00:17 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:16:13.298 456+0 records in 00:16:13.298 456+0 records out 00:16:13.298 233472 bytes (233 kB, 228 KiB) copied, 0.00158071 s, 148 MB/s 00:16:13.298 13:00:17 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:16:13.298 13:00:17 -- bdev/bdev_raid.sh@46 -- # blockdev 
--flushbufs /dev/nbd0 00:16:13.298 13:00:17 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:16:13.298 13:00:17 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:16:13.298 13:00:17 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:16:13.298 13:00:17 -- bdev/bdev_raid.sh@53 -- # return 0 00:16:13.298 13:00:17 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:16:13.298 13:00:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:13.298 13:00:17 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:16:13.298 13:00:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:13.298 13:00:17 -- bdev/nbd_common.sh@51 -- # local i 00:16:13.298 13:00:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:13.298 13:00:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:16:13.557 13:00:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:13.557 [2024-04-17 13:00:17.534225] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:13.557 13:00:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:13.557 13:00:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:13.557 13:00:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:13.557 13:00:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:13.557 13:00:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:13.557 13:00:17 -- bdev/nbd_common.sh@41 -- # break 00:16:13.557 13:00:17 -- bdev/nbd_common.sh@45 -- # return 0 00:16:13.557 13:00:17 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:16:13.557 13:00:17 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:13.557 13:00:17 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:16:13.816 13:00:17 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:13.816 13:00:17 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:13.816 13:00:17 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:13.816 13:00:17 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:13.816 13:00:17 -- bdev/nbd_common.sh@65 -- # echo '' 00:16:13.816 13:00:17 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:13.816 13:00:17 -- bdev/nbd_common.sh@65 -- # true 00:16:13.816 13:00:17 -- bdev/nbd_common.sh@65 -- # count=0 00:16:13.816 13:00:17 -- bdev/nbd_common.sh@66 -- # echo 0 00:16:13.816 13:00:17 -- bdev/bdev_raid.sh@106 -- # count=0 00:16:13.816 13:00:17 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:16:13.816 13:00:17 -- bdev/bdev_raid.sh@111 -- # killprocess 118859 00:16:13.816 13:00:17 -- common/autotest_common.sh@924 -- # '[' -z 118859 ']' 00:16:13.816 13:00:17 -- common/autotest_common.sh@928 -- # kill -0 118859 00:16:13.816 13:00:17 -- common/autotest_common.sh@929 -- # uname 00:16:13.816 13:00:17 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:16:13.816 13:00:17 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 118859 00:16:13.816 13:00:17 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:16:13.816 killing process with pid 118859 00:16:13.816 13:00:17 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:16:13.816 13:00:17 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 118859' 00:16:13.816 13:00:17 -- common/autotest_common.sh@943 -- # kill 118859 00:16:13.816 13:00:17 -- common/autotest_common.sh@948 -- # wait 118859 00:16:13.816 [2024-04-17 
13:00:17.863533] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:13.816 [2024-04-17 13:00:17.863650] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:13.816 [2024-04-17 13:00:17.863707] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:13.816 [2024-04-17 13:00:17.863718] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid, state offline 00:16:14.075 [2024-04-17 13:00:18.032299] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:15.014 13:00:19 -- bdev/bdev_raid.sh@113 -- # return 0 00:16:15.014 00:16:15.014 real 0m4.580s 00:16:15.014 user 0m6.052s 00:16:15.014 sys 0m0.840s 00:16:15.014 ************************************ 00:16:15.014 END TEST raid_function_test_concat 00:16:15.014 ************************************ 00:16:15.014 13:00:19 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:16:15.014 13:00:19 -- common/autotest_common.sh@10 -- # set +x 00:16:15.273 13:00:19 -- bdev/bdev_raid.sh@723 -- # run_test raid0_resize_test raid0_resize_test 00:16:15.273 13:00:19 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:16:15.273 13:00:19 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:16:15.273 13:00:19 -- common/autotest_common.sh@10 -- # set +x 00:16:15.273 ************************************ 00:16:15.273 START TEST raid0_resize_test 00:16:15.273 ************************************ 00:16:15.273 13:00:19 -- common/autotest_common.sh@1099 -- # raid0_resize_test 00:16:15.273 13:00:19 -- bdev/bdev_raid.sh@293 -- # local blksize=512 00:16:15.273 13:00:19 -- bdev/bdev_raid.sh@294 -- # local bdev_size_mb=32 00:16:15.273 13:00:19 -- bdev/bdev_raid.sh@295 -- # local new_bdev_size_mb=64 00:16:15.273 13:00:19 -- bdev/bdev_raid.sh@296 -- # local blkcnt 00:16:15.273 13:00:19 -- bdev/bdev_raid.sh@297 -- # local raid_size_mb 00:16:15.273 13:00:19 -- bdev/bdev_raid.sh@298 -- # local new_raid_size_mb 00:16:15.273 13:00:19 -- bdev/bdev_raid.sh@301 -- # raid_pid=119018 00:16:15.273 13:00:19 -- bdev/bdev_raid.sh@300 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:15.273 Process raid pid: 119018 00:16:15.273 13:00:19 -- bdev/bdev_raid.sh@302 -- # echo 'Process raid pid: 119018' 00:16:15.273 13:00:19 -- bdev/bdev_raid.sh@303 -- # waitforlisten 119018 /var/tmp/spdk-raid.sock 00:16:15.273 13:00:19 -- common/autotest_common.sh@817 -- # '[' -z 119018 ']' 00:16:15.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:15.273 13:00:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:15.273 13:00:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:15.273 13:00:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:15.273 13:00:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:15.273 13:00:19 -- common/autotest_common.sh@10 -- # set +x 00:16:15.273 [2024-04-17 13:00:19.279801] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
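The raid0_resize_test starting here builds a two-leg raid0 out of null bdevs and then grows the legs one at a time, checking the array size after each step. The RPC sequence, condensed from the xtrace that follows (the rpc and sock variables are the same illustrative assumptions as in the earlier sketch; the commands themselves are verbatim from the trace):

    $rpc -s $sock bdev_null_create Base_1 32 512    # 32 MiB null bdev, 512 B blocks
    $rpc -s $sock bdev_null_create Base_2 32 512
    $rpc -s $sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid
    $rpc -s $sock bdev_null_resize Base_1 64        # grow the first leg to 64 MiB
    blkcnt=$($rpc -s $sock bdev_get_bdevs -b Raid | jq '.[].num_blocks')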
00:16:15.273 [2024-04-17 13:00:19.280002] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:15.532 [2024-04-17 13:00:19.444201] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.790 [2024-04-17 13:00:19.698891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.790 [2024-04-17 13:00:19.903292] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:16.358 13:00:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:16.358 13:00:20 -- common/autotest_common.sh@850 -- # return 0 00:16:16.358 13:00:20 -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:16:16.358 Base_1 00:16:16.358 13:00:20 -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:16:16.617 Base_2 00:16:16.617 13:00:20 -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:16:16.876 [2024-04-17 13:00:20.938980] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:16:16.876 [2024-04-17 13:00:20.941287] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:16:16.876 [2024-04-17 13:00:20.941375] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:16:16.876 [2024-04-17 13:00:20.941389] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:16.876 [2024-04-17 13:00:20.941563] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005380 00:16:16.876 [2024-04-17 13:00:20.941925] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:16:16.876 [2024-04-17 13:00:20.941940] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x616000007280 00:16:16.876 [2024-04-17 13:00:20.942131] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.876 13:00:20 -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:16:17.135 [2024-04-17 13:00:21.183038] bdev_raid.c:2217:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:16:17.135 [2024-04-17 13:00:21.183088] bdev_raid.c:2230:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:16:17.135 true 00:16:17.135 13:00:21 -- bdev/bdev_raid.sh@314 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:16:17.135 13:00:21 -- bdev/bdev_raid.sh@314 -- # jq '.[].num_blocks' 00:16:17.393 [2024-04-17 13:00:21.439171] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:17.393 13:00:21 -- bdev/bdev_raid.sh@314 -- # blkcnt=131072 00:16:17.393 13:00:21 -- bdev/bdev_raid.sh@315 -- # raid_size_mb=64 00:16:17.393 13:00:21 -- bdev/bdev_raid.sh@316 -- # '[' 64 '!=' 64 ']' 00:16:17.393 13:00:21 -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:16:17.652 [2024-04-17 13:00:21.703124] bdev_raid.c:2217:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 
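The size check above passes because growing only Base_1 cannot grow a raid0: in this trace the array capacity tracks num_base_bdevs times the smallest leg, so 2 x min(131072, 65536) = 131072 blocks, i.e. 64 MiB at 512 B blocks, exactly what the first jq query reported. Once the Base_2 resize completing below lands, the minimum rises and the array doubles to 2 x 131072 = 262144 blocks (128 MiB), which the second jq check confirms.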
00:16:17.652 [2024-04-17 13:00:21.703174] bdev_raid.c:2230:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:16:17.652 [2024-04-17 13:00:21.703227] raid0.c: 430:raid0_resize: *NOTICE*: raid0 'Raid': min blockcount was changed from 262144 to 262144 00:16:17.652 [2024-04-17 13:00:21.703303] bdev_raid.c:1694:raid_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:17.652 true 00:16:17.652 13:00:21 -- bdev/bdev_raid.sh@325 -- # jq '.[].num_blocks' 00:16:17.652 13:00:21 -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:16:17.925 [2024-04-17 13:00:21.979289] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:17.925 13:00:21 -- bdev/bdev_raid.sh@325 -- # blkcnt=262144 00:16:17.925 13:00:21 -- bdev/bdev_raid.sh@326 -- # raid_size_mb=128 00:16:17.925 13:00:21 -- bdev/bdev_raid.sh@327 -- # '[' 128 '!=' 128 ']' 00:16:17.925 13:00:21 -- bdev/bdev_raid.sh@332 -- # killprocess 119018 00:16:17.925 13:00:21 -- common/autotest_common.sh@924 -- # '[' -z 119018 ']' 00:16:17.925 13:00:21 -- common/autotest_common.sh@928 -- # kill -0 119018 00:16:17.925 13:00:21 -- common/autotest_common.sh@929 -- # uname 00:16:17.925 13:00:21 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:16:17.925 13:00:21 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 119018 00:16:17.925 13:00:22 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:16:17.925 killing process with pid 119018 00:16:17.925 13:00:22 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:16:17.925 13:00:22 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 119018' 00:16:17.925 13:00:22 -- common/autotest_common.sh@943 -- # kill 119018 00:16:17.925 13:00:22 -- common/autotest_common.sh@948 -- # wait 119018 00:16:17.925 [2024-04-17 13:00:22.012537] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:17.925 [2024-04-17 13:00:22.012633] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:17.925 [2024-04-17 13:00:22.012687] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:17.925 [2024-04-17 13:00:22.012698] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Raid, state offline 00:16:17.925 [2024-04-17 13:00:22.013295] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:19.303 13:00:23 -- bdev/bdev_raid.sh@334 -- # return 0 00:16:19.303 00:16:19.303 real 0m3.925s 00:16:19.303 user 0m5.623s 00:16:19.303 sys 0m0.510s 00:16:19.303 ************************************ 00:16:19.303 END TEST raid0_resize_test 00:16:19.303 ************************************ 00:16:19.304 13:00:23 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:16:19.304 13:00:23 -- common/autotest_common.sh@10 -- # set +x 00:16:19.304 13:00:23 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:16:19.304 13:00:23 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:16:19.304 13:00:23 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:16:19.304 13:00:23 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:16:19.304 13:00:23 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:16:19.304 13:00:23 -- common/autotest_common.sh@10 -- # set +x 00:16:19.304 ************************************ 00:16:19.304 START TEST 
raid_state_function_test 00:16:19.304 ************************************ 00:16:19.304 13:00:23 -- common/autotest_common.sh@1099 -- # raid_state_function_test raid0 2 false 00:16:19.304 13:00:23 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:16:19.304 13:00:23 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:16:19.304 13:00:23 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:16:19.305 13:00:23 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:19.305 13:00:23 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:19.305 13:00:23 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:19.305 13:00:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:19.305 13:00:23 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:19.305 13:00:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:19.305 13:00:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:19.305 13:00:23 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:19.305 13:00:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:19.305 13:00:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:19.305 13:00:23 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:19.305 13:00:23 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:19.305 13:00:23 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:19.305 13:00:23 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:19.305 13:00:23 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:19.305 13:00:23 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:16:19.305 13:00:23 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:19.305 13:00:23 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:19.305 13:00:23 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:16:19.305 13:00:23 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:16:19.305 13:00:23 -- bdev/bdev_raid.sh@226 -- # raid_pid=119128 00:16:19.305 Process raid pid: 119128 00:16:19.305 13:00:23 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 119128' 00:16:19.305 13:00:23 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:19.305 13:00:23 -- bdev/bdev_raid.sh@228 -- # waitforlisten 119128 /var/tmp/spdk-raid.sock 00:16:19.305 13:00:23 -- common/autotest_common.sh@817 -- # '[' -z 119128 ']' 00:16:19.305 13:00:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:19.305 13:00:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:19.305 13:00:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:19.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:19.305 13:00:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:19.305 13:00:23 -- common/autotest_common.sh@10 -- # set +x 00:16:19.306 [2024-04-17 13:00:23.289175] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:16:19.306 [2024-04-17 13:00:23.289383] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:19.572 [2024-04-17 13:00:23.453492] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.572 [2024-04-17 13:00:23.663348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.840 [2024-04-17 13:00:23.863170] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:20.103 13:00:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:20.103 13:00:24 -- common/autotest_common.sh@850 -- # return 0 00:16:20.103 13:00:24 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:20.362 [2024-04-17 13:00:24.413158] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:20.362 [2024-04-17 13:00:24.413262] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:20.362 [2024-04-17 13:00:24.413278] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:20.362 [2024-04-17 13:00:24.413300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:20.362 13:00:24 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:16:20.362 13:00:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:20.362 13:00:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:20.362 13:00:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:20.362 13:00:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:20.362 13:00:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:20.362 13:00:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:20.362 13:00:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:20.362 13:00:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:20.362 13:00:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:20.362 13:00:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:20.362 13:00:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.626 13:00:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:20.626 "name": "Existed_Raid", 00:16:20.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.626 "strip_size_kb": 64, 00:16:20.626 "state": "configuring", 00:16:20.626 "raid_level": "raid0", 00:16:20.626 "superblock": false, 00:16:20.626 "num_base_bdevs": 2, 00:16:20.626 "num_base_bdevs_discovered": 0, 00:16:20.626 "num_base_bdevs_operational": 2, 00:16:20.626 "base_bdevs_list": [ 00:16:20.626 { 00:16:20.626 "name": "BaseBdev1", 00:16:20.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.626 "is_configured": false, 00:16:20.626 "data_offset": 0, 00:16:20.626 "data_size": 0 00:16:20.626 }, 00:16:20.626 { 00:16:20.626 "name": "BaseBdev2", 00:16:20.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.626 "is_configured": false, 00:16:20.626 "data_offset": 0, 00:16:20.626 "data_size": 0 00:16:20.626 } 00:16:20.626 ] 00:16:20.626 }' 00:16:20.626 13:00:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:20.627 13:00:24 -- 
common/autotest_common.sh@10 -- # set +x 00:16:21.569 13:00:25 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:21.569 [2024-04-17 13:00:25.641312] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:21.569 [2024-04-17 13:00:25.641367] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:16:21.569 13:00:25 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:21.828 [2024-04-17 13:00:25.885388] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:21.828 [2024-04-17 13:00:25.885502] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:21.828 [2024-04-17 13:00:25.885518] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:21.828 [2024-04-17 13:00:25.885544] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:21.828 13:00:25 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:22.085 [2024-04-17 13:00:26.217328] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:22.085 BaseBdev1 00:16:22.343 13:00:26 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:22.343 13:00:26 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:16:22.343 13:00:26 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:22.343 13:00:26 -- common/autotest_common.sh@887 -- # local i 00:16:22.343 13:00:26 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:22.343 13:00:26 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:22.343 13:00:26 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:22.343 13:00:26 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:22.601 [ 00:16:22.601 { 00:16:22.601 "name": "BaseBdev1", 00:16:22.601 "aliases": [ 00:16:22.601 "9011b753-324d-413e-8508-0adc1e54c693" 00:16:22.601 ], 00:16:22.601 "product_name": "Malloc disk", 00:16:22.601 "block_size": 512, 00:16:22.601 "num_blocks": 65536, 00:16:22.601 "uuid": "9011b753-324d-413e-8508-0adc1e54c693", 00:16:22.601 "assigned_rate_limits": { 00:16:22.601 "rw_ios_per_sec": 0, 00:16:22.601 "rw_mbytes_per_sec": 0, 00:16:22.601 "r_mbytes_per_sec": 0, 00:16:22.601 "w_mbytes_per_sec": 0 00:16:22.601 }, 00:16:22.601 "claimed": true, 00:16:22.601 "claim_type": "exclusive_write", 00:16:22.601 "zoned": false, 00:16:22.601 "supported_io_types": { 00:16:22.601 "read": true, 00:16:22.601 "write": true, 00:16:22.601 "unmap": true, 00:16:22.601 "write_zeroes": true, 00:16:22.601 "flush": true, 00:16:22.601 "reset": true, 00:16:22.601 "compare": false, 00:16:22.601 "compare_and_write": false, 00:16:22.601 "abort": true, 00:16:22.601 "nvme_admin": false, 00:16:22.601 "nvme_io": false 00:16:22.601 }, 00:16:22.601 "memory_domains": [ 00:16:22.601 { 00:16:22.601 "dma_device_id": "system", 00:16:22.601 "dma_device_type": 1 00:16:22.601 }, 00:16:22.601 { 00:16:22.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.601 "dma_device_type": 2 00:16:22.601 
} 00:16:22.601 ], 00:16:22.601 "driver_specific": {} 00:16:22.601 } 00:16:22.601 ] 00:16:22.601 13:00:26 -- common/autotest_common.sh@893 -- # return 0 00:16:22.601 13:00:26 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:16:22.601 13:00:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:22.601 13:00:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:22.601 13:00:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:22.601 13:00:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:22.601 13:00:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:22.601 13:00:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:22.602 13:00:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:22.602 13:00:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:22.602 13:00:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:22.602 13:00:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.602 13:00:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.860 13:00:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:22.860 "name": "Existed_Raid", 00:16:22.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.860 "strip_size_kb": 64, 00:16:22.860 "state": "configuring", 00:16:22.860 "raid_level": "raid0", 00:16:22.860 "superblock": false, 00:16:22.860 "num_base_bdevs": 2, 00:16:22.860 "num_base_bdevs_discovered": 1, 00:16:22.860 "num_base_bdevs_operational": 2, 00:16:22.860 "base_bdevs_list": [ 00:16:22.860 { 00:16:22.860 "name": "BaseBdev1", 00:16:22.860 "uuid": "9011b753-324d-413e-8508-0adc1e54c693", 00:16:22.860 "is_configured": true, 00:16:22.860 "data_offset": 0, 00:16:22.860 "data_size": 65536 00:16:22.860 }, 00:16:22.860 { 00:16:22.860 "name": "BaseBdev2", 00:16:22.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.860 "is_configured": false, 00:16:22.860 "data_offset": 0, 00:16:22.860 "data_size": 0 00:16:22.860 } 00:16:22.860 ] 00:16:22.860 }' 00:16:22.860 13:00:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:22.860 13:00:26 -- common/autotest_common.sh@10 -- # set +x 00:16:23.791 13:00:27 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:23.791 [2024-04-17 13:00:27.845791] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:23.791 [2024-04-17 13:00:27.845864] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:16:23.791 13:00:27 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:16:23.791 13:00:27 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:24.066 [2024-04-17 13:00:28.077867] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:24.066 [2024-04-17 13:00:28.080064] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:24.066 [2024-04-17 13:00:28.080141] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:24.066 13:00:28 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:24.066 13:00:28 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:24.066 13:00:28 -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:16:24.066 13:00:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:24.066 13:00:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:24.066 13:00:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:24.066 13:00:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:24.066 13:00:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:24.066 13:00:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:24.066 13:00:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:24.066 13:00:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:24.066 13:00:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:24.066 13:00:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.066 13:00:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.328 13:00:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:24.328 "name": "Existed_Raid", 00:16:24.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.328 "strip_size_kb": 64, 00:16:24.328 "state": "configuring", 00:16:24.328 "raid_level": "raid0", 00:16:24.328 "superblock": false, 00:16:24.328 "num_base_bdevs": 2, 00:16:24.328 "num_base_bdevs_discovered": 1, 00:16:24.328 "num_base_bdevs_operational": 2, 00:16:24.328 "base_bdevs_list": [ 00:16:24.328 { 00:16:24.328 "name": "BaseBdev1", 00:16:24.328 "uuid": "9011b753-324d-413e-8508-0adc1e54c693", 00:16:24.328 "is_configured": true, 00:16:24.328 "data_offset": 0, 00:16:24.328 "data_size": 65536 00:16:24.328 }, 00:16:24.328 { 00:16:24.328 "name": "BaseBdev2", 00:16:24.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.328 "is_configured": false, 00:16:24.328 "data_offset": 0, 00:16:24.328 "data_size": 0 00:16:24.328 } 00:16:24.328 ] 00:16:24.328 }' 00:16:24.328 13:00:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:24.328 13:00:28 -- common/autotest_common.sh@10 -- # set +x 00:16:25.260 13:00:29 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:25.261 BaseBdev2 00:16:25.261 [2024-04-17 13:00:29.330749] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:25.261 [2024-04-17 13:00:29.330798] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:16:25.261 [2024-04-17 13:00:29.330821] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:25.261 [2024-04-17 13:00:29.330983] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:16:25.261 [2024-04-17 13:00:29.331337] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:16:25.261 [2024-04-17 13:00:29.331353] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:16:25.261 [2024-04-17 13:00:29.331637] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:25.261 13:00:29 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:25.261 13:00:29 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:16:25.261 13:00:29 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:25.261 13:00:29 -- common/autotest_common.sh@887 -- # local i 00:16:25.261 13:00:29 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 
00:16:25.261 13:00:29 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:25.261 13:00:29 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:25.519 13:00:29 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:25.777 [ 00:16:25.777 { 00:16:25.777 "name": "BaseBdev2", 00:16:25.777 "aliases": [ 00:16:25.777 "1ba1bfdb-cceb-4ac9-889d-781ed9c26ff9" 00:16:25.777 ], 00:16:25.777 "product_name": "Malloc disk", 00:16:25.777 "block_size": 512, 00:16:25.777 "num_blocks": 65536, 00:16:25.777 "uuid": "1ba1bfdb-cceb-4ac9-889d-781ed9c26ff9", 00:16:25.777 "assigned_rate_limits": { 00:16:25.777 "rw_ios_per_sec": 0, 00:16:25.777 "rw_mbytes_per_sec": 0, 00:16:25.777 "r_mbytes_per_sec": 0, 00:16:25.777 "w_mbytes_per_sec": 0 00:16:25.777 }, 00:16:25.777 "claimed": true, 00:16:25.777 "claim_type": "exclusive_write", 00:16:25.777 "zoned": false, 00:16:25.777 "supported_io_types": { 00:16:25.777 "read": true, 00:16:25.777 "write": true, 00:16:25.777 "unmap": true, 00:16:25.777 "write_zeroes": true, 00:16:25.777 "flush": true, 00:16:25.777 "reset": true, 00:16:25.777 "compare": false, 00:16:25.777 "compare_and_write": false, 00:16:25.777 "abort": true, 00:16:25.777 "nvme_admin": false, 00:16:25.777 "nvme_io": false 00:16:25.777 }, 00:16:25.777 "memory_domains": [ 00:16:25.777 { 00:16:25.777 "dma_device_id": "system", 00:16:25.777 "dma_device_type": 1 00:16:25.777 }, 00:16:25.777 { 00:16:25.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.777 "dma_device_type": 2 00:16:25.777 } 00:16:25.777 ], 00:16:25.777 "driver_specific": {} 00:16:25.777 } 00:16:25.777 ] 00:16:25.777 13:00:29 -- common/autotest_common.sh@893 -- # return 0 00:16:25.777 13:00:29 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:25.777 13:00:29 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:25.777 13:00:29 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:16:25.777 13:00:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:25.777 13:00:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:25.777 13:00:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:25.777 13:00:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:25.777 13:00:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:25.777 13:00:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:25.777 13:00:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:25.777 13:00:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:25.777 13:00:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:25.777 13:00:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:25.777 13:00:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.035 13:00:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:26.035 "name": "Existed_Raid", 00:16:26.035 "uuid": "46d9f3c1-affc-410e-af38-6f578b1996bb", 00:16:26.035 "strip_size_kb": 64, 00:16:26.035 "state": "online", 00:16:26.035 "raid_level": "raid0", 00:16:26.035 "superblock": false, 00:16:26.035 "num_base_bdevs": 2, 00:16:26.035 "num_base_bdevs_discovered": 2, 00:16:26.035 "num_base_bdevs_operational": 2, 00:16:26.035 "base_bdevs_list": [ 00:16:26.035 { 00:16:26.035 "name": "BaseBdev1", 00:16:26.035 "uuid": 
"9011b753-324d-413e-8508-0adc1e54c693", 00:16:26.035 "is_configured": true, 00:16:26.035 "data_offset": 0, 00:16:26.035 "data_size": 65536 00:16:26.035 }, 00:16:26.035 { 00:16:26.035 "name": "BaseBdev2", 00:16:26.035 "uuid": "1ba1bfdb-cceb-4ac9-889d-781ed9c26ff9", 00:16:26.035 "is_configured": true, 00:16:26.035 "data_offset": 0, 00:16:26.035 "data_size": 65536 00:16:26.035 } 00:16:26.035 ] 00:16:26.035 }' 00:16:26.035 13:00:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:26.035 13:00:30 -- common/autotest_common.sh@10 -- # set +x 00:16:26.600 13:00:30 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:26.859 [2024-04-17 13:00:30.899224] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:26.859 [2024-04-17 13:00:30.899267] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:26.859 [2024-04-17 13:00:30.899358] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:26.859 13:00:30 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:26.859 13:00:30 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:16:26.859 13:00:30 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:26.859 13:00:30 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:26.859 13:00:30 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:26.859 13:00:30 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:16:26.859 13:00:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:26.859 13:00:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:26.859 13:00:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:26.859 13:00:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:26.859 13:00:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:16:26.859 13:00:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:26.859 13:00:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:26.859 13:00:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:26.859 13:00:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:26.859 13:00:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:26.859 13:00:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.118 13:00:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:27.118 "name": "Existed_Raid", 00:16:27.118 "uuid": "46d9f3c1-affc-410e-af38-6f578b1996bb", 00:16:27.118 "strip_size_kb": 64, 00:16:27.118 "state": "offline", 00:16:27.118 "raid_level": "raid0", 00:16:27.118 "superblock": false, 00:16:27.118 "num_base_bdevs": 2, 00:16:27.118 "num_base_bdevs_discovered": 1, 00:16:27.118 "num_base_bdevs_operational": 1, 00:16:27.118 "base_bdevs_list": [ 00:16:27.118 { 00:16:27.118 "name": null, 00:16:27.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.118 "is_configured": false, 00:16:27.118 "data_offset": 0, 00:16:27.118 "data_size": 65536 00:16:27.118 }, 00:16:27.118 { 00:16:27.118 "name": "BaseBdev2", 00:16:27.118 "uuid": "1ba1bfdb-cceb-4ac9-889d-781ed9c26ff9", 00:16:27.118 "is_configured": true, 00:16:27.118 "data_offset": 0, 00:16:27.118 "data_size": 65536 00:16:27.118 } 00:16:27.118 ] 00:16:27.118 }' 00:16:27.118 13:00:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:27.118 13:00:31 -- common/autotest_common.sh@10 -- # set +x 00:16:28.053 13:00:31 -- 
bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:28.053 13:00:31 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:28.053 13:00:31 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:28.053 13:00:31 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:28.312 13:00:32 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:28.312 13:00:32 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:28.312 13:00:32 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:28.312 [2024-04-17 13:00:32.430710] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:28.312 [2024-04-17 13:00:32.430803] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:16:28.572 13:00:32 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:28.572 13:00:32 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:28.572 13:00:32 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:28.572 13:00:32 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:28.840 13:00:32 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:28.840 13:00:32 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:28.840 13:00:32 -- bdev/bdev_raid.sh@287 -- # killprocess 119128 00:16:28.840 13:00:32 -- common/autotest_common.sh@924 -- # '[' -z 119128 ']' 00:16:28.840 13:00:32 -- common/autotest_common.sh@928 -- # kill -0 119128 00:16:28.840 13:00:32 -- common/autotest_common.sh@929 -- # uname 00:16:28.840 13:00:32 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:16:28.840 13:00:32 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 119128 00:16:28.840 killing process with pid 119128 00:16:28.840 13:00:32 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:16:28.840 13:00:32 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:16:28.840 13:00:32 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 119128' 00:16:28.840 13:00:32 -- common/autotest_common.sh@943 -- # kill 119128 00:16:28.840 13:00:32 -- common/autotest_common.sh@948 -- # wait 119128 00:16:28.840 [2024-04-17 13:00:32.781223] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:28.840 [2024-04-17 13:00:32.781360] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:29.774 ************************************ 00:16:29.774 END TEST raid_state_function_test 00:16:29.774 ************************************ 00:16:29.774 13:00:33 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:29.774 00:16:29.774 real 0m10.665s 00:16:29.774 user 0m18.732s 00:16:29.774 sys 0m1.135s 00:16:29.774 13:00:33 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:16:29.774 13:00:33 -- common/autotest_common.sh@10 -- # set +x 00:16:30.033 13:00:33 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:16:30.033 13:00:33 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:16:30.033 13:00:33 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:16:30.033 13:00:33 -- common/autotest_common.sh@10 -- # set +x 00:16:30.033 ************************************ 00:16:30.033 START TEST raid_state_function_test_sb 00:16:30.033 ************************************ 00:16:30.033 13:00:33 -- common/autotest_common.sh@1099 -- # 
raid_state_function_test raid0 2 true 00:16:30.033 13:00:33 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:16:30.033 13:00:33 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:16:30.033 13:00:33 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:16:30.033 13:00:33 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:30.033 13:00:33 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:30.033 13:00:33 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:30.033 13:00:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:30.033 13:00:33 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:30.033 13:00:33 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:30.033 13:00:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:30.033 13:00:33 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:30.033 13:00:33 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:30.033 13:00:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:30.033 13:00:33 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:30.034 13:00:33 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:30.034 13:00:33 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:30.034 13:00:33 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:30.034 13:00:33 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:30.034 13:00:33 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:16:30.034 13:00:33 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:30.034 13:00:33 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:30.034 13:00:33 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:16:30.034 13:00:33 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:16:30.034 13:00:33 -- bdev/bdev_raid.sh@226 -- # raid_pid=119481 00:16:30.034 13:00:33 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:30.034 13:00:33 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 119481' 00:16:30.034 Process raid pid: 119481 00:16:30.034 13:00:33 -- bdev/bdev_raid.sh@228 -- # waitforlisten 119481 /var/tmp/spdk-raid.sock 00:16:30.034 13:00:33 -- common/autotest_common.sh@817 -- # '[' -z 119481 ']' 00:16:30.034 13:00:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:30.034 13:00:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:30.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:30.034 13:00:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:30.034 13:00:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:30.034 13:00:33 -- common/autotest_common.sh@10 -- # set +x 00:16:30.034 [2024-04-17 13:00:34.027045] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
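This _sb variant differs from the preceding raid_state_function_test only in setting superblock=true, which turns into the -s flag on the create call and reserves space at the start of each base bdev for raid metadata. Compare the two create calls, both verbatim from the traces (rpc and sock as in the earlier sketches):

    $rpc -s $sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid     # no superblock
    $rpc -s $sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid  # superblock variant

The effect is visible in the base bdev JSON further below: data_offset moves from 0 to 2048 blocks and data_size shrinks from 65536 to 63488, keeping data_offset + data_size at the full 65536-block device.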
00:16:30.034 [2024-04-17 13:00:34.027207] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:30.295 [2024-04-17 13:00:34.185498] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.295 [2024-04-17 13:00:34.395215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.557 [2024-04-17 13:00:34.593345] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:31.134 13:00:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:31.134 13:00:35 -- common/autotest_common.sh@850 -- # return 0 00:16:31.134 13:00:35 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:31.134 [2024-04-17 13:00:35.231600] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:31.134 [2024-04-17 13:00:35.231685] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:31.134 [2024-04-17 13:00:35.231699] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:31.134 [2024-04-17 13:00:35.231731] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:31.134 13:00:35 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:16:31.134 13:00:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:31.134 13:00:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:31.134 13:00:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:31.134 13:00:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:31.134 13:00:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:31.134 13:00:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:31.134 13:00:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:31.134 13:00:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:31.134 13:00:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:31.134 13:00:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.134 13:00:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.397 13:00:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:31.397 "name": "Existed_Raid", 00:16:31.397 "uuid": "34030055-2beb-4813-8bec-fb44119ebbc0", 00:16:31.397 "strip_size_kb": 64, 00:16:31.397 "state": "configuring", 00:16:31.397 "raid_level": "raid0", 00:16:31.397 "superblock": true, 00:16:31.397 "num_base_bdevs": 2, 00:16:31.397 "num_base_bdevs_discovered": 0, 00:16:31.397 "num_base_bdevs_operational": 2, 00:16:31.397 "base_bdevs_list": [ 00:16:31.397 { 00:16:31.397 "name": "BaseBdev1", 00:16:31.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.397 "is_configured": false, 00:16:31.397 "data_offset": 0, 00:16:31.397 "data_size": 0 00:16:31.397 }, 00:16:31.397 { 00:16:31.397 "name": "BaseBdev2", 00:16:31.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.397 "is_configured": false, 00:16:31.397 "data_offset": 0, 00:16:31.397 "data_size": 0 00:16:31.397 } 00:16:31.397 ] 00:16:31.397 }' 00:16:31.397 13:00:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:31.397 13:00:35 -- 
common/autotest_common.sh@10 -- # set +x 00:16:32.350 13:00:36 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:32.608 [2024-04-17 13:00:36.555717] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:32.608 [2024-04-17 13:00:36.555769] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:16:32.608 13:00:36 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:32.866 [2024-04-17 13:00:36.807819] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:32.866 [2024-04-17 13:00:36.807941] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:32.866 [2024-04-17 13:00:36.807956] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:32.866 [2024-04-17 13:00:36.807981] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:32.866 13:00:36 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:33.124 [2024-04-17 13:00:37.063164] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:33.124 BaseBdev1 00:16:33.124 13:00:37 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:33.124 13:00:37 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:16:33.124 13:00:37 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:33.124 13:00:37 -- common/autotest_common.sh@887 -- # local i 00:16:33.124 13:00:37 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:33.124 13:00:37 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:33.124 13:00:37 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:33.383 13:00:37 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:33.642 [ 00:16:33.642 { 00:16:33.642 "name": "BaseBdev1", 00:16:33.642 "aliases": [ 00:16:33.642 "4f992d3c-3b15-4dd6-ba31-fc7a932b66fc" 00:16:33.642 ], 00:16:33.642 "product_name": "Malloc disk", 00:16:33.642 "block_size": 512, 00:16:33.642 "num_blocks": 65536, 00:16:33.642 "uuid": "4f992d3c-3b15-4dd6-ba31-fc7a932b66fc", 00:16:33.642 "assigned_rate_limits": { 00:16:33.642 "rw_ios_per_sec": 0, 00:16:33.642 "rw_mbytes_per_sec": 0, 00:16:33.642 "r_mbytes_per_sec": 0, 00:16:33.642 "w_mbytes_per_sec": 0 00:16:33.642 }, 00:16:33.642 "claimed": true, 00:16:33.642 "claim_type": "exclusive_write", 00:16:33.642 "zoned": false, 00:16:33.642 "supported_io_types": { 00:16:33.642 "read": true, 00:16:33.642 "write": true, 00:16:33.642 "unmap": true, 00:16:33.642 "write_zeroes": true, 00:16:33.642 "flush": true, 00:16:33.642 "reset": true, 00:16:33.642 "compare": false, 00:16:33.642 "compare_and_write": false, 00:16:33.642 "abort": true, 00:16:33.642 "nvme_admin": false, 00:16:33.642 "nvme_io": false 00:16:33.642 }, 00:16:33.642 "memory_domains": [ 00:16:33.642 { 00:16:33.642 "dma_device_id": "system", 00:16:33.642 "dma_device_type": 1 00:16:33.642 }, 00:16:33.642 { 00:16:33.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.642 "dma_device_type": 2 
00:16:33.642 } 00:16:33.642 ], 00:16:33.642 "driver_specific": {} 00:16:33.642 } 00:16:33.642 ] 00:16:33.642 13:00:37 -- common/autotest_common.sh@893 -- # return 0 00:16:33.642 13:00:37 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:16:33.642 13:00:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:33.642 13:00:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:33.642 13:00:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:33.642 13:00:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:33.642 13:00:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:33.642 13:00:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:33.642 13:00:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:33.642 13:00:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:33.642 13:00:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:33.642 13:00:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.642 13:00:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.900 13:00:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:33.900 "name": "Existed_Raid", 00:16:33.900 "uuid": "c07b61ab-b4fa-410e-ac6e-7a109549646a", 00:16:33.900 "strip_size_kb": 64, 00:16:33.900 "state": "configuring", 00:16:33.900 "raid_level": "raid0", 00:16:33.900 "superblock": true, 00:16:33.900 "num_base_bdevs": 2, 00:16:33.900 "num_base_bdevs_discovered": 1, 00:16:33.900 "num_base_bdevs_operational": 2, 00:16:33.900 "base_bdevs_list": [ 00:16:33.900 { 00:16:33.900 "name": "BaseBdev1", 00:16:33.900 "uuid": "4f992d3c-3b15-4dd6-ba31-fc7a932b66fc", 00:16:33.900 "is_configured": true, 00:16:33.900 "data_offset": 2048, 00:16:33.900 "data_size": 63488 00:16:33.900 }, 00:16:33.900 { 00:16:33.900 "name": "BaseBdev2", 00:16:33.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.900 "is_configured": false, 00:16:33.900 "data_offset": 0, 00:16:33.901 "data_size": 0 00:16:33.901 } 00:16:33.901 ] 00:16:33.901 }' 00:16:33.901 13:00:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:33.901 13:00:37 -- common/autotest_common.sh@10 -- # set +x 00:16:34.467 13:00:38 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:34.726 [2024-04-17 13:00:38.659630] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:34.727 [2024-04-17 13:00:38.659702] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:16:34.727 13:00:38 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:34.727 13:00:38 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:34.986 13:00:38 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:35.245 BaseBdev1 00:16:35.245 13:00:39 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:16:35.245 13:00:39 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:16:35.245 13:00:39 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:35.245 13:00:39 -- common/autotest_common.sh@887 -- # local i 00:16:35.245 13:00:39 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:35.245 13:00:39 -- 
common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:35.245 13:00:39 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:35.504 13:00:39 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:35.763 [ 00:16:35.763 { 00:16:35.763 "name": "BaseBdev1", 00:16:35.763 "aliases": [ 00:16:35.763 "03f99b16-1c68-4095-ace5-59da09fe60c3" 00:16:35.763 ], 00:16:35.763 "product_name": "Malloc disk", 00:16:35.763 "block_size": 512, 00:16:35.763 "num_blocks": 65536, 00:16:35.763 "uuid": "03f99b16-1c68-4095-ace5-59da09fe60c3", 00:16:35.763 "assigned_rate_limits": { 00:16:35.763 "rw_ios_per_sec": 0, 00:16:35.763 "rw_mbytes_per_sec": 0, 00:16:35.763 "r_mbytes_per_sec": 0, 00:16:35.763 "w_mbytes_per_sec": 0 00:16:35.763 }, 00:16:35.763 "claimed": false, 00:16:35.763 "zoned": false, 00:16:35.763 "supported_io_types": { 00:16:35.763 "read": true, 00:16:35.763 "write": true, 00:16:35.763 "unmap": true, 00:16:35.763 "write_zeroes": true, 00:16:35.763 "flush": true, 00:16:35.763 "reset": true, 00:16:35.763 "compare": false, 00:16:35.763 "compare_and_write": false, 00:16:35.763 "abort": true, 00:16:35.763 "nvme_admin": false, 00:16:35.763 "nvme_io": false 00:16:35.763 }, 00:16:35.763 "memory_domains": [ 00:16:35.763 { 00:16:35.763 "dma_device_id": "system", 00:16:35.763 "dma_device_type": 1 00:16:35.763 }, 00:16:35.763 { 00:16:35.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.763 "dma_device_type": 2 00:16:35.763 } 00:16:35.763 ], 00:16:35.763 "driver_specific": {} 00:16:35.763 } 00:16:35.763 ] 00:16:35.763 13:00:39 -- common/autotest_common.sh@893 -- # return 0 00:16:35.763 13:00:39 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:36.022 [2024-04-17 13:00:40.013037] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:36.022 [2024-04-17 13:00:40.015291] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:36.022 [2024-04-17 13:00:40.015378] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:36.022 13:00:40 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:36.022 13:00:40 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:36.022 13:00:40 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:16:36.022 13:00:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:36.022 13:00:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:36.022 13:00:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:36.022 13:00:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:36.022 13:00:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:36.022 13:00:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:36.022 13:00:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:36.022 13:00:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:36.022 13:00:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:36.022 13:00:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:36.022 13:00:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.280 
13:00:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:36.280 "name": "Existed_Raid", 00:16:36.280 "uuid": "73edb595-b15d-4ad0-bc97-24c7bc0d7154", 00:16:36.280 "strip_size_kb": 64, 00:16:36.280 "state": "configuring", 00:16:36.280 "raid_level": "raid0", 00:16:36.280 "superblock": true, 00:16:36.280 "num_base_bdevs": 2, 00:16:36.280 "num_base_bdevs_discovered": 1, 00:16:36.280 "num_base_bdevs_operational": 2, 00:16:36.280 "base_bdevs_list": [ 00:16:36.280 { 00:16:36.280 "name": "BaseBdev1", 00:16:36.280 "uuid": "03f99b16-1c68-4095-ace5-59da09fe60c3", 00:16:36.280 "is_configured": true, 00:16:36.280 "data_offset": 2048, 00:16:36.280 "data_size": 63488 00:16:36.280 }, 00:16:36.280 { 00:16:36.280 "name": "BaseBdev2", 00:16:36.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.280 "is_configured": false, 00:16:36.280 "data_offset": 0, 00:16:36.280 "data_size": 0 00:16:36.280 } 00:16:36.280 ] 00:16:36.280 }' 00:16:36.280 13:00:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:36.280 13:00:40 -- common/autotest_common.sh@10 -- # set +x 00:16:37.238 13:00:41 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:37.238 [2024-04-17 13:00:41.315323] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:37.238 [2024-04-17 13:00:41.315580] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:16:37.238 [2024-04-17 13:00:41.315596] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:37.238 [2024-04-17 13:00:41.315736] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:16:37.238 BaseBdev2 00:16:37.238 [2024-04-17 13:00:41.316135] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:16:37.238 [2024-04-17 13:00:41.316161] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:16:37.238 [2024-04-17 13:00:41.316340] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:37.238 13:00:41 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:37.238 13:00:41 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:16:37.238 13:00:41 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:37.238 13:00:41 -- common/autotest_common.sh@887 -- # local i 00:16:37.238 13:00:41 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:37.238 13:00:41 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:37.238 13:00:41 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:37.527 13:00:41 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:37.787 [ 00:16:37.787 { 00:16:37.787 "name": "BaseBdev2", 00:16:37.787 "aliases": [ 00:16:37.787 "9831a07b-6ea6-43d8-85a1-80cc3662734a" 00:16:37.787 ], 00:16:37.787 "product_name": "Malloc disk", 00:16:37.787 "block_size": 512, 00:16:37.787 "num_blocks": 65536, 00:16:37.787 "uuid": "9831a07b-6ea6-43d8-85a1-80cc3662734a", 00:16:37.787 "assigned_rate_limits": { 00:16:37.787 "rw_ios_per_sec": 0, 00:16:37.787 "rw_mbytes_per_sec": 0, 00:16:37.787 "r_mbytes_per_sec": 0, 00:16:37.787 "w_mbytes_per_sec": 0 00:16:37.787 }, 00:16:37.787 "claimed": true, 00:16:37.787 "claim_type": "exclusive_write", 00:16:37.787 
"zoned": false, 00:16:37.787 "supported_io_types": { 00:16:37.787 "read": true, 00:16:37.787 "write": true, 00:16:37.787 "unmap": true, 00:16:37.787 "write_zeroes": true, 00:16:37.787 "flush": true, 00:16:37.787 "reset": true, 00:16:37.787 "compare": false, 00:16:37.787 "compare_and_write": false, 00:16:37.787 "abort": true, 00:16:37.787 "nvme_admin": false, 00:16:37.787 "nvme_io": false 00:16:37.787 }, 00:16:37.787 "memory_domains": [ 00:16:37.787 { 00:16:37.787 "dma_device_id": "system", 00:16:37.787 "dma_device_type": 1 00:16:37.787 }, 00:16:37.787 { 00:16:37.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.787 "dma_device_type": 2 00:16:37.787 } 00:16:37.787 ], 00:16:37.787 "driver_specific": {} 00:16:37.787 } 00:16:37.787 ] 00:16:37.787 13:00:41 -- common/autotest_common.sh@893 -- # return 0 00:16:37.787 13:00:41 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:37.787 13:00:41 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:37.787 13:00:41 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:16:37.787 13:00:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:37.787 13:00:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:37.787 13:00:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:37.787 13:00:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:37.787 13:00:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:37.787 13:00:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:37.787 13:00:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:37.787 13:00:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:37.787 13:00:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:37.787 13:00:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.787 13:00:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.046 13:00:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:38.046 "name": "Existed_Raid", 00:16:38.046 "uuid": "73edb595-b15d-4ad0-bc97-24c7bc0d7154", 00:16:38.047 "strip_size_kb": 64, 00:16:38.047 "state": "online", 00:16:38.047 "raid_level": "raid0", 00:16:38.047 "superblock": true, 00:16:38.047 "num_base_bdevs": 2, 00:16:38.047 "num_base_bdevs_discovered": 2, 00:16:38.047 "num_base_bdevs_operational": 2, 00:16:38.047 "base_bdevs_list": [ 00:16:38.047 { 00:16:38.047 "name": "BaseBdev1", 00:16:38.047 "uuid": "03f99b16-1c68-4095-ace5-59da09fe60c3", 00:16:38.047 "is_configured": true, 00:16:38.047 "data_offset": 2048, 00:16:38.047 "data_size": 63488 00:16:38.047 }, 00:16:38.047 { 00:16:38.047 "name": "BaseBdev2", 00:16:38.047 "uuid": "9831a07b-6ea6-43d8-85a1-80cc3662734a", 00:16:38.047 "is_configured": true, 00:16:38.047 "data_offset": 2048, 00:16:38.047 "data_size": 63488 00:16:38.047 } 00:16:38.047 ] 00:16:38.047 }' 00:16:38.047 13:00:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:38.047 13:00:42 -- common/autotest_common.sh@10 -- # set +x 00:16:38.623 13:00:42 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:38.891 [2024-04-17 13:00:42.979965] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:38.891 [2024-04-17 13:00:42.980005] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:38.891 [2024-04-17 13:00:42.980082] bdev_raid.c: 449:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:16:39.153 13:00:43 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:39.153 13:00:43 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:16:39.153 13:00:43 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:39.153 13:00:43 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:39.153 13:00:43 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:39.153 13:00:43 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:16:39.153 13:00:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:39.153 13:00:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:39.153 13:00:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:39.153 13:00:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:39.153 13:00:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:16:39.153 13:00:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:39.153 13:00:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:39.153 13:00:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:39.153 13:00:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:39.153 13:00:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.153 13:00:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.421 13:00:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:39.421 "name": "Existed_Raid", 00:16:39.421 "uuid": "73edb595-b15d-4ad0-bc97-24c7bc0d7154", 00:16:39.421 "strip_size_kb": 64, 00:16:39.421 "state": "offline", 00:16:39.421 "raid_level": "raid0", 00:16:39.421 "superblock": true, 00:16:39.421 "num_base_bdevs": 2, 00:16:39.421 "num_base_bdevs_discovered": 1, 00:16:39.421 "num_base_bdevs_operational": 1, 00:16:39.421 "base_bdevs_list": [ 00:16:39.421 { 00:16:39.421 "name": null, 00:16:39.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.421 "is_configured": false, 00:16:39.421 "data_offset": 2048, 00:16:39.421 "data_size": 63488 00:16:39.421 }, 00:16:39.421 { 00:16:39.421 "name": "BaseBdev2", 00:16:39.421 "uuid": "9831a07b-6ea6-43d8-85a1-80cc3662734a", 00:16:39.421 "is_configured": true, 00:16:39.421 "data_offset": 2048, 00:16:39.421 "data_size": 63488 00:16:39.421 } 00:16:39.421 ] 00:16:39.421 }' 00:16:39.421 13:00:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:39.421 13:00:43 -- common/autotest_common.sh@10 -- # set +x 00:16:40.084 13:00:44 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:40.084 13:00:44 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:40.084 13:00:44 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.084 13:00:44 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:40.342 13:00:44 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:40.342 13:00:44 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:40.342 13:00:44 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:40.600 [2024-04-17 13:00:44.510229] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:40.600 [2024-04-17 13:00:44.510315] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:16:40.600 13:00:44 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:40.600 13:00:44 -- bdev/bdev_raid.sh@273 -- # (( i < 
num_base_bdevs )) 00:16:40.600 13:00:44 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.600 13:00:44 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:40.858 13:00:44 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:40.858 13:00:44 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:40.858 13:00:44 -- bdev/bdev_raid.sh@287 -- # killprocess 119481 00:16:40.858 13:00:44 -- common/autotest_common.sh@924 -- # '[' -z 119481 ']' 00:16:40.858 13:00:44 -- common/autotest_common.sh@928 -- # kill -0 119481 00:16:40.858 13:00:44 -- common/autotest_common.sh@929 -- # uname 00:16:40.858 13:00:44 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:16:40.858 13:00:44 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 119481 00:16:40.858 killing process with pid 119481 00:16:40.858 13:00:44 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:16:40.858 13:00:44 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:16:40.858 13:00:44 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 119481' 00:16:40.858 13:00:44 -- common/autotest_common.sh@943 -- # kill 119481 00:16:40.858 13:00:44 -- common/autotest_common.sh@948 -- # wait 119481 00:16:40.858 [2024-04-17 13:00:44.887438] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:40.858 [2024-04-17 13:00:44.887559] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:42.230 ************************************ 00:16:42.231 END TEST raid_state_function_test_sb 00:16:42.231 ************************************ 00:16:42.231 13:00:45 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:42.231 00:16:42.231 real 0m12.018s 00:16:42.231 user 0m21.049s 00:16:42.231 sys 0m1.389s 00:16:42.231 13:00:45 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:16:42.231 13:00:45 -- common/autotest_common.sh@10 -- # set +x 00:16:42.231 13:00:46 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:16:42.231 13:00:46 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:16:42.231 13:00:46 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:16:42.231 13:00:46 -- common/autotest_common.sh@10 -- # set +x 00:16:42.231 ************************************ 00:16:42.231 START TEST raid_superblock_test 00:16:42.231 ************************************ 00:16:42.231 13:00:46 -- common/autotest_common.sh@1099 -- # raid_superblock_test raid0 2 00:16:42.231 13:00:46 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:16:42.231 13:00:46 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:16:42.231 13:00:46 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:16:42.231 13:00:46 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:16:42.231 13:00:46 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:16:42.231 13:00:46 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:16:42.231 13:00:46 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:16:42.231 13:00:46 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:16:42.231 13:00:46 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:16:42.231 13:00:46 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:16:42.231 13:00:46 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:16:42.231 13:00:46 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:16:42.231 13:00:46 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:16:42.231 13:00:46 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' 
raid1 ']' 00:16:42.231 13:00:46 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:16:42.231 13:00:46 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:16:42.231 13:00:46 -- bdev/bdev_raid.sh@357 -- # raid_pid=119849 00:16:42.231 13:00:46 -- bdev/bdev_raid.sh@358 -- # waitforlisten 119849 /var/tmp/spdk-raid.sock 00:16:42.231 13:00:46 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:42.231 13:00:46 -- common/autotest_common.sh@817 -- # '[' -z 119849 ']' 00:16:42.231 13:00:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:42.231 13:00:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:42.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:42.231 13:00:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:42.231 13:00:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:42.231 13:00:46 -- common/autotest_common.sh@10 -- # set +x 00:16:42.231 [2024-04-17 13:00:46.131327] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:16:42.231 [2024-04-17 13:00:46.131535] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119849 ] 00:16:42.231 [2024-04-17 13:00:46.297413] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.488 [2024-04-17 13:00:46.506230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.747 [2024-04-17 13:00:46.701760] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:43.004 13:00:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:43.004 13:00:47 -- common/autotest_common.sh@850 -- # return 0 00:16:43.004 13:00:47 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:16:43.004 13:00:47 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:43.004 13:00:47 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:16:43.004 13:00:47 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:16:43.004 13:00:47 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:43.004 13:00:47 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:43.004 13:00:47 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:43.004 13:00:47 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:43.004 13:00:47 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:43.262 malloc1 00:16:43.262 13:00:47 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:43.520 [2024-04-17 13:00:47.606488] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:43.520 [2024-04-17 13:00:47.606635] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.520 [2024-04-17 13:00:47.606673] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:16:43.520 [2024-04-17 13:00:47.606728] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.520 [2024-04-17 
13:00:47.609262] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.520 [2024-04-17 13:00:47.609328] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:43.520 pt1 00:16:43.520 13:00:47 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:43.520 13:00:47 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:43.520 13:00:47 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:16:43.520 13:00:47 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:16:43.520 13:00:47 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:43.520 13:00:47 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:43.520 13:00:47 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:43.520 13:00:47 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:43.520 13:00:47 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:43.778 malloc2 00:16:43.778 13:00:47 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:44.036 [2024-04-17 13:00:48.121913] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:44.036 [2024-04-17 13:00:48.122050] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:44.036 [2024-04-17 13:00:48.122101] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:16:44.036 [2024-04-17 13:00:48.122164] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:44.036 [2024-04-17 13:00:48.124705] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:44.036 [2024-04-17 13:00:48.124772] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:44.036 pt2 00:16:44.036 13:00:48 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:44.036 13:00:48 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:44.036 13:00:48 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:16:44.304 [2024-04-17 13:00:48.350016] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:44.304 [2024-04-17 13:00:48.352250] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:44.304 [2024-04-17 13:00:48.352493] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:16:44.304 [2024-04-17 13:00:48.352510] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:44.304 [2024-04-17 13:00:48.352664] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:16:44.304 [2024-04-17 13:00:48.353078] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:16:44.304 [2024-04-17 13:00:48.353104] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:16:44.304 [2024-04-17 13:00:48.353265] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.304 13:00:48 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:16:44.304 13:00:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:44.304 
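For reference, a minimal manual reproduction of the setup the trace above just exercised — a sketch only, not part of the captured run. It assumes an SPDK target already listening on /var/tmp/spdk-raid.sock and uses only RPCs that appear verbatim in this log (two 32 MiB/512 B malloc bdevs, each wrapped in a passthru bdev, then assembled into a raid0 with a superblock):
  # create base devices: malloc -> passthru (pt1/pt2), as in the trace
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  # assemble raid0 (-z 64 = 64 KiB strip size, -s = write superblock)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'pt1 pt2' -n raid_bdev1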
13:00:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:44.304 13:00:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:44.304 13:00:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:44.304 13:00:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:44.304 13:00:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:44.304 13:00:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:44.304 13:00:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:44.304 13:00:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:44.304 13:00:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.304 13:00:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.572 13:00:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:44.573 "name": "raid_bdev1", 00:16:44.573 "uuid": "79ac5ea0-a98f-4f4e-aeb1-e1aad2ba18c9", 00:16:44.573 "strip_size_kb": 64, 00:16:44.573 "state": "online", 00:16:44.573 "raid_level": "raid0", 00:16:44.573 "superblock": true, 00:16:44.573 "num_base_bdevs": 2, 00:16:44.573 "num_base_bdevs_discovered": 2, 00:16:44.573 "num_base_bdevs_operational": 2, 00:16:44.573 "base_bdevs_list": [ 00:16:44.573 { 00:16:44.573 "name": "pt1", 00:16:44.573 "uuid": "f052e1e4-83bf-5769-b2c1-46fa47d11b59", 00:16:44.573 "is_configured": true, 00:16:44.573 "data_offset": 2048, 00:16:44.573 "data_size": 63488 00:16:44.573 }, 00:16:44.573 { 00:16:44.573 "name": "pt2", 00:16:44.573 "uuid": "b2835d10-2a99-5d8e-8bca-d9b2fb8e8af4", 00:16:44.573 "is_configured": true, 00:16:44.573 "data_offset": 2048, 00:16:44.573 "data_size": 63488 00:16:44.573 } 00:16:44.573 ] 00:16:44.573 }' 00:16:44.573 13:00:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:44.573 13:00:48 -- common/autotest_common.sh@10 -- # set +x 00:16:45.516 13:00:49 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:45.516 13:00:49 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:16:45.516 [2024-04-17 13:00:49.546537] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:45.516 13:00:49 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=79ac5ea0-a98f-4f4e-aeb1-e1aad2ba18c9 00:16:45.516 13:00:49 -- bdev/bdev_raid.sh@380 -- # '[' -z 79ac5ea0-a98f-4f4e-aeb1-e1aad2ba18c9 ']' 00:16:45.516 13:00:49 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:45.775 [2024-04-17 13:00:49.810306] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:45.775 [2024-04-17 13:00:49.810352] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:45.775 [2024-04-17 13:00:49.810496] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:45.775 [2024-04-17 13:00:49.810567] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:45.775 [2024-04-17 13:00:49.810581] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:16:45.775 13:00:49 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.775 13:00:49 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:16:46.033 13:00:50 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 
00:16:46.033 13:00:50 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:16:46.033 13:00:50 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:46.033 13:00:50 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:46.293 13:00:50 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:46.293 13:00:50 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:46.552 13:00:50 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:46.552 13:00:50 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:46.811 13:00:50 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:16:46.811 13:00:50 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:16:46.811 13:00:50 -- common/autotest_common.sh@638 -- # local es=0 00:16:46.811 13:00:50 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:16:46.811 13:00:50 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:46.811 13:00:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:46.811 13:00:50 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:46.811 13:00:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:46.811 13:00:50 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:46.811 13:00:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:46.811 13:00:50 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:46.811 13:00:50 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:46.811 13:00:50 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:16:47.069 [2024-04-17 13:00:51.070610] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:47.069 [2024-04-17 13:00:51.072761] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:47.069 [2024-04-17 13:00:51.072841] bdev_raid.c:2995:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:16:47.069 [2024-04-17 13:00:51.072922] bdev_raid.c:2995:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:16:47.069 [2024-04-17 13:00:51.072966] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:47.069 [2024-04-17 13:00:51.072978] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state configuring 00:16:47.069 request: 00:16:47.069 { 00:16:47.069 "name": "raid_bdev1", 00:16:47.069 "raid_level": "raid0", 00:16:47.069 "base_bdevs": [ 00:16:47.069 "malloc1", 00:16:47.069 "malloc2" 00:16:47.069 ], 00:16:47.069 "superblock": false, 00:16:47.069 "strip_size_kb": 64, 00:16:47.069 "method": "bdev_raid_create", 00:16:47.069 "req_id": 1 00:16:47.069 } 00:16:47.069 Got 
JSON-RPC error response 00:16:47.069 response: 00:16:47.069 { 00:16:47.069 "code": -17, 00:16:47.069 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:47.069 } 00:16:47.069 13:00:51 -- common/autotest_common.sh@641 -- # es=1 00:16:47.069 13:00:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:47.069 13:00:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:47.069 13:00:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:47.069 13:00:51 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.069 13:00:51 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:16:47.364 13:00:51 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:16:47.364 13:00:51 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:16:47.364 13:00:51 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:47.622 [2024-04-17 13:00:51.614662] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:47.622 [2024-04-17 13:00:51.614789] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.622 [2024-04-17 13:00:51.614831] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:47.622 [2024-04-17 13:00:51.614860] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.622 [2024-04-17 13:00:51.617424] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.622 [2024-04-17 13:00:51.617497] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:47.622 [2024-04-17 13:00:51.617623] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:16:47.622 [2024-04-17 13:00:51.617692] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:47.622 pt1 00:16:47.622 13:00:51 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:16:47.622 13:00:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:47.622 13:00:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:47.622 13:00:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:47.622 13:00:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:47.622 13:00:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:47.622 13:00:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:47.622 13:00:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:47.622 13:00:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:47.622 13:00:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:47.622 13:00:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.622 13:00:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.881 13:00:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:47.881 "name": "raid_bdev1", 00:16:47.881 "uuid": "79ac5ea0-a98f-4f4e-aeb1-e1aad2ba18c9", 00:16:47.881 "strip_size_kb": 64, 00:16:47.881 "state": "configuring", 00:16:47.881 "raid_level": "raid0", 00:16:47.881 "superblock": true, 00:16:47.881 "num_base_bdevs": 2, 00:16:47.881 "num_base_bdevs_discovered": 1, 00:16:47.881 "num_base_bdevs_operational": 2, 00:16:47.881 "base_bdevs_list": [ 00:16:47.881 { 00:16:47.881 "name": 
"pt1", 00:16:47.881 "uuid": "f052e1e4-83bf-5769-b2c1-46fa47d11b59", 00:16:47.881 "is_configured": true, 00:16:47.881 "data_offset": 2048, 00:16:47.881 "data_size": 63488 00:16:47.881 }, 00:16:47.881 { 00:16:47.881 "name": null, 00:16:47.881 "uuid": "b2835d10-2a99-5d8e-8bca-d9b2fb8e8af4", 00:16:47.881 "is_configured": false, 00:16:47.881 "data_offset": 2048, 00:16:47.881 "data_size": 63488 00:16:47.881 } 00:16:47.881 ] 00:16:47.881 }' 00:16:47.881 13:00:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:47.881 13:00:51 -- common/autotest_common.sh@10 -- # set +x 00:16:48.448 13:00:52 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:16:48.448 13:00:52 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:16:48.448 13:00:52 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:48.448 13:00:52 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:48.707 [2024-04-17 13:00:52.734924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:48.707 [2024-04-17 13:00:52.735035] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.707 [2024-04-17 13:00:52.735076] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:48.707 [2024-04-17 13:00:52.735105] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.707 [2024-04-17 13:00:52.735590] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.707 [2024-04-17 13:00:52.735636] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:48.707 [2024-04-17 13:00:52.735741] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:48.707 [2024-04-17 13:00:52.735769] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:48.707 [2024-04-17 13:00:52.735904] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:16:48.707 [2024-04-17 13:00:52.735924] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:48.707 [2024-04-17 13:00:52.736058] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:16:48.707 [2024-04-17 13:00:52.736424] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:16:48.707 [2024-04-17 13:00:52.736445] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:16:48.707 [2024-04-17 13:00:52.736585] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:48.707 pt2 00:16:48.707 13:00:52 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:48.707 13:00:52 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:48.707 13:00:52 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:16:48.707 13:00:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:48.707 13:00:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:48.707 13:00:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:48.707 13:00:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:48.707 13:00:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:48.707 13:00:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:48.707 13:00:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:48.707 
13:00:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:48.707 13:00:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:48.707 13:00:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.707 13:00:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.965 13:00:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:48.965 "name": "raid_bdev1", 00:16:48.965 "uuid": "79ac5ea0-a98f-4f4e-aeb1-e1aad2ba18c9", 00:16:48.965 "strip_size_kb": 64, 00:16:48.965 "state": "online", 00:16:48.965 "raid_level": "raid0", 00:16:48.965 "superblock": true, 00:16:48.965 "num_base_bdevs": 2, 00:16:48.965 "num_base_bdevs_discovered": 2, 00:16:48.965 "num_base_bdevs_operational": 2, 00:16:48.965 "base_bdevs_list": [ 00:16:48.965 { 00:16:48.965 "name": "pt1", 00:16:48.965 "uuid": "f052e1e4-83bf-5769-b2c1-46fa47d11b59", 00:16:48.965 "is_configured": true, 00:16:48.965 "data_offset": 2048, 00:16:48.965 "data_size": 63488 00:16:48.965 }, 00:16:48.965 { 00:16:48.965 "name": "pt2", 00:16:48.965 "uuid": "b2835d10-2a99-5d8e-8bca-d9b2fb8e8af4", 00:16:48.965 "is_configured": true, 00:16:48.965 "data_offset": 2048, 00:16:48.965 "data_size": 63488 00:16:48.965 } 00:16:48.965 ] 00:16:48.965 }' 00:16:48.965 13:00:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:48.965 13:00:53 -- common/autotest_common.sh@10 -- # set +x 00:16:49.898 13:00:53 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:16:49.898 13:00:53 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:49.898 [2024-04-17 13:00:53.899545] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:49.898 13:00:53 -- bdev/bdev_raid.sh@430 -- # '[' 79ac5ea0-a98f-4f4e-aeb1-e1aad2ba18c9 '!=' 79ac5ea0-a98f-4f4e-aeb1-e1aad2ba18c9 ']' 00:16:49.898 13:00:53 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:16:49.898 13:00:53 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:49.898 13:00:53 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:49.898 13:00:53 -- bdev/bdev_raid.sh@511 -- # killprocess 119849 00:16:49.898 13:00:53 -- common/autotest_common.sh@924 -- # '[' -z 119849 ']' 00:16:49.898 13:00:53 -- common/autotest_common.sh@928 -- # kill -0 119849 00:16:49.898 13:00:53 -- common/autotest_common.sh@929 -- # uname 00:16:49.898 13:00:53 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:16:49.898 13:00:53 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 119849 00:16:49.898 13:00:53 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:16:49.898 13:00:53 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:16:49.898 13:00:53 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 119849' 00:16:49.898 killing process with pid 119849 00:16:49.898 13:00:53 -- common/autotest_common.sh@943 -- # kill 119849 00:16:49.898 [2024-04-17 13:00:53.936659] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:49.898 [2024-04-17 13:00:53.936744] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:49.898 [2024-04-17 13:00:53.936797] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:49.898 [2024-04-17 13:00:53.936809] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:16:49.898 13:00:53 -- 
common/autotest_common.sh@948 -- # wait 119849 00:16:50.156 [2024-04-17 13:00:54.098383] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:51.091 13:00:55 -- bdev/bdev_raid.sh@513 -- # return 0 00:16:51.091 00:16:51.091 real 0m9.122s 00:16:51.091 user 0m15.617s 00:16:51.091 sys 0m1.122s 00:16:51.091 13:00:55 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:16:51.091 ************************************ 00:16:51.091 END TEST raid_superblock_test 00:16:51.091 ************************************ 00:16:51.091 13:00:55 -- common/autotest_common.sh@10 -- # set +x 00:16:51.091 13:00:55 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:16:51.091 13:00:55 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:16:51.091 13:00:55 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:16:51.091 13:00:55 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:16:51.091 13:00:55 -- common/autotest_common.sh@10 -- # set +x 00:16:51.349 ************************************ 00:16:51.349 START TEST raid_state_function_test 00:16:51.349 ************************************ 00:16:51.349 13:00:55 -- common/autotest_common.sh@1099 -- # raid_state_function_test concat 2 false 00:16:51.349 13:00:55 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:16:51.349 13:00:55 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:16:51.349 13:00:55 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:16:51.349 13:00:55 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:51.349 13:00:55 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:51.349 13:00:55 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:51.349 13:00:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:51.349 13:00:55 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:51.349 13:00:55 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:51.349 13:00:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:51.349 13:00:55 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:51.349 13:00:55 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:51.349 13:00:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:51.349 13:00:55 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:51.350 13:00:55 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:51.350 13:00:55 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:51.350 13:00:55 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:51.350 13:00:55 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:51.350 13:00:55 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:16:51.350 13:00:55 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:51.350 13:00:55 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:51.350 13:00:55 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:16:51.350 13:00:55 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:16:51.350 13:00:55 -- bdev/bdev_raid.sh@226 -- # raid_pid=120129 00:16:51.350 Process raid pid: 120129 00:16:51.350 13:00:55 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 120129' 00:16:51.350 13:00:55 -- bdev/bdev_raid.sh@228 -- # waitforlisten 120129 /var/tmp/spdk-raid.sock 00:16:51.350 13:00:55 -- common/autotest_common.sh@817 -- # '[' -z 120129 ']' 00:16:51.350 13:00:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:51.350 13:00:55 -- common/autotest_common.sh@822 -- # local max_retries=100 
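A sketch of the state check this harness repeats throughout (the verify_raid_bdev_state pattern): dump all raid bdevs over JSON-RPC and filter with jq. The socket path, RPC name, jq filter, and the .state field are taken from the trace itself; the one-liner below is illustrative, not part of the captured run:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # expect configuring/online/offline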
00:16:51.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:51.350 13:00:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:51.350 13:00:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:51.350 13:00:55 -- common/autotest_common.sh@10 -- # set +x 00:16:51.350 13:00:55 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:51.350 [2024-04-17 13:00:55.330833] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:16:51.350 [2024-04-17 13:00:55.331038] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.607 [2024-04-17 13:00:55.497734] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.607 [2024-04-17 13:00:55.708058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.866 [2024-04-17 13:00:55.909541] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:52.439 13:00:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:52.439 13:00:56 -- common/autotest_common.sh@850 -- # return 0 00:16:52.439 13:00:56 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:52.708 [2024-04-17 13:00:56.611658] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:52.708 [2024-04-17 13:00:56.611947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:52.708 [2024-04-17 13:00:56.612053] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:52.708 [2024-04-17 13:00:56.612112] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:52.708 13:00:56 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:16:52.708 13:00:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:52.708 13:00:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:52.708 13:00:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:52.708 13:00:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:52.708 13:00:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:52.708 13:00:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:52.708 13:00:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:52.708 13:00:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:52.708 13:00:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:52.708 13:00:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:52.708 13:00:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.966 13:00:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:52.966 "name": "Existed_Raid", 00:16:52.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.966 "strip_size_kb": 64, 00:16:52.966 "state": "configuring", 00:16:52.966 "raid_level": "concat", 00:16:52.966 "superblock": false, 00:16:52.966 "num_base_bdevs": 2, 00:16:52.966 
"num_base_bdevs_discovered": 0, 00:16:52.966 "num_base_bdevs_operational": 2, 00:16:52.966 "base_bdevs_list": [ 00:16:52.966 { 00:16:52.966 "name": "BaseBdev1", 00:16:52.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.966 "is_configured": false, 00:16:52.966 "data_offset": 0, 00:16:52.966 "data_size": 0 00:16:52.966 }, 00:16:52.966 { 00:16:52.966 "name": "BaseBdev2", 00:16:52.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.966 "is_configured": false, 00:16:52.966 "data_offset": 0, 00:16:52.966 "data_size": 0 00:16:52.966 } 00:16:52.966 ] 00:16:52.966 }' 00:16:52.966 13:00:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:52.966 13:00:56 -- common/autotest_common.sh@10 -- # set +x 00:16:53.531 13:00:57 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:53.790 [2024-04-17 13:00:57.808309] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:53.790 [2024-04-17 13:00:57.808575] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:16:53.790 13:00:57 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:54.048 [2024-04-17 13:00:58.028368] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:54.048 [2024-04-17 13:00:58.028691] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:54.049 [2024-04-17 13:00:58.028812] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:54.049 [2024-04-17 13:00:58.028876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:54.049 13:00:58 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:54.306 [2024-04-17 13:00:58.297067] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:54.306 BaseBdev1 00:16:54.307 13:00:58 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:54.307 13:00:58 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:16:54.307 13:00:58 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:54.307 13:00:58 -- common/autotest_common.sh@887 -- # local i 00:16:54.307 13:00:58 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:54.307 13:00:58 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:54.307 13:00:58 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:54.579 13:00:58 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:54.837 [ 00:16:54.837 { 00:16:54.837 "name": "BaseBdev1", 00:16:54.837 "aliases": [ 00:16:54.837 "eff025c5-f387-4892-b8ec-7583c892f18e" 00:16:54.837 ], 00:16:54.837 "product_name": "Malloc disk", 00:16:54.837 "block_size": 512, 00:16:54.837 "num_blocks": 65536, 00:16:54.837 "uuid": "eff025c5-f387-4892-b8ec-7583c892f18e", 00:16:54.837 "assigned_rate_limits": { 00:16:54.837 "rw_ios_per_sec": 0, 00:16:54.837 "rw_mbytes_per_sec": 0, 00:16:54.837 "r_mbytes_per_sec": 0, 00:16:54.837 "w_mbytes_per_sec": 0 00:16:54.837 }, 00:16:54.837 "claimed": true, 00:16:54.837 "claim_type": 
"exclusive_write", 00:16:54.837 "zoned": false, 00:16:54.837 "supported_io_types": { 00:16:54.837 "read": true, 00:16:54.837 "write": true, 00:16:54.837 "unmap": true, 00:16:54.837 "write_zeroes": true, 00:16:54.837 "flush": true, 00:16:54.837 "reset": true, 00:16:54.837 "compare": false, 00:16:54.837 "compare_and_write": false, 00:16:54.837 "abort": true, 00:16:54.837 "nvme_admin": false, 00:16:54.837 "nvme_io": false 00:16:54.837 }, 00:16:54.837 "memory_domains": [ 00:16:54.837 { 00:16:54.837 "dma_device_id": "system", 00:16:54.837 "dma_device_type": 1 00:16:54.837 }, 00:16:54.837 { 00:16:54.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.837 "dma_device_type": 2 00:16:54.837 } 00:16:54.837 ], 00:16:54.837 "driver_specific": {} 00:16:54.837 } 00:16:54.837 ] 00:16:54.837 13:00:58 -- common/autotest_common.sh@893 -- # return 0 00:16:54.837 13:00:58 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:16:54.837 13:00:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:54.837 13:00:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:54.837 13:00:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:54.837 13:00:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:54.837 13:00:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:54.837 13:00:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:54.837 13:00:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:54.837 13:00:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:54.837 13:00:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:54.837 13:00:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:54.837 13:00:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.095 13:00:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:55.095 "name": "Existed_Raid", 00:16:55.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.095 "strip_size_kb": 64, 00:16:55.095 "state": "configuring", 00:16:55.095 "raid_level": "concat", 00:16:55.095 "superblock": false, 00:16:55.095 "num_base_bdevs": 2, 00:16:55.095 "num_base_bdevs_discovered": 1, 00:16:55.095 "num_base_bdevs_operational": 2, 00:16:55.095 "base_bdevs_list": [ 00:16:55.095 { 00:16:55.095 "name": "BaseBdev1", 00:16:55.095 "uuid": "eff025c5-f387-4892-b8ec-7583c892f18e", 00:16:55.095 "is_configured": true, 00:16:55.095 "data_offset": 0, 00:16:55.095 "data_size": 65536 00:16:55.095 }, 00:16:55.095 { 00:16:55.095 "name": "BaseBdev2", 00:16:55.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.095 "is_configured": false, 00:16:55.095 "data_offset": 0, 00:16:55.095 "data_size": 0 00:16:55.095 } 00:16:55.095 ] 00:16:55.095 }' 00:16:55.095 13:00:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:55.095 13:00:59 -- common/autotest_common.sh@10 -- # set +x 00:16:55.661 13:00:59 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:55.919 [2024-04-17 13:01:00.013594] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:55.919 [2024-04-17 13:01:00.013823] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:16:55.919 13:01:00 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:16:55.919 13:01:00 -- bdev/bdev_raid.sh@253 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:56.177 [2024-04-17 13:01:00.237716] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:56.177 [2024-04-17 13:01:00.240099] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:56.177 [2024-04-17 13:01:00.240281] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:56.177 13:01:00 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:56.177 13:01:00 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:56.177 13:01:00 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:16:56.177 13:01:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:56.177 13:01:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:56.177 13:01:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:56.177 13:01:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:56.177 13:01:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:56.177 13:01:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:56.177 13:01:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:56.177 13:01:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:56.177 13:01:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:56.177 13:01:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:56.177 13:01:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.435 13:01:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:56.435 "name": "Existed_Raid", 00:16:56.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.435 "strip_size_kb": 64, 00:16:56.435 "state": "configuring", 00:16:56.435 "raid_level": "concat", 00:16:56.435 "superblock": false, 00:16:56.435 "num_base_bdevs": 2, 00:16:56.435 "num_base_bdevs_discovered": 1, 00:16:56.435 "num_base_bdevs_operational": 2, 00:16:56.435 "base_bdevs_list": [ 00:16:56.435 { 00:16:56.435 "name": "BaseBdev1", 00:16:56.435 "uuid": "eff025c5-f387-4892-b8ec-7583c892f18e", 00:16:56.435 "is_configured": true, 00:16:56.435 "data_offset": 0, 00:16:56.435 "data_size": 65536 00:16:56.435 }, 00:16:56.435 { 00:16:56.435 "name": "BaseBdev2", 00:16:56.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.435 "is_configured": false, 00:16:56.435 "data_offset": 0, 00:16:56.435 "data_size": 0 00:16:56.435 } 00:16:56.435 ] 00:16:56.435 }' 00:16:56.435 13:01:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:56.435 13:01:00 -- common/autotest_common.sh@10 -- # set +x 00:16:57.372 13:01:01 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:57.372 [2024-04-17 13:01:01.508599] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:57.372 [2024-04-17 13:01:01.508670] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:16:57.372 [2024-04-17 13:01:01.508709] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:57.372 [2024-04-17 13:01:01.508843] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:16:57.372 [2024-04-17 13:01:01.509219] 
bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:16:57.372 [2024-04-17 13:01:01.509245] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:16:57.372 [2024-04-17 13:01:01.509521] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:57.372 BaseBdev2 00:16:57.631 13:01:01 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:57.631 13:01:01 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:16:57.631 13:01:01 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:57.631 13:01:01 -- common/autotest_common.sh@887 -- # local i 00:16:57.631 13:01:01 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:57.631 13:01:01 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:57.631 13:01:01 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:57.631 13:01:01 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:58.197 [ 00:16:58.197 { 00:16:58.197 "name": "BaseBdev2", 00:16:58.198 "aliases": [ 00:16:58.198 "e47807fa-7d1c-4f56-9515-8038ff854a74" 00:16:58.198 ], 00:16:58.198 "product_name": "Malloc disk", 00:16:58.198 "block_size": 512, 00:16:58.198 "num_blocks": 65536, 00:16:58.198 "uuid": "e47807fa-7d1c-4f56-9515-8038ff854a74", 00:16:58.198 "assigned_rate_limits": { 00:16:58.198 "rw_ios_per_sec": 0, 00:16:58.198 "rw_mbytes_per_sec": 0, 00:16:58.198 "r_mbytes_per_sec": 0, 00:16:58.198 "w_mbytes_per_sec": 0 00:16:58.198 }, 00:16:58.198 "claimed": true, 00:16:58.198 "claim_type": "exclusive_write", 00:16:58.198 "zoned": false, 00:16:58.198 "supported_io_types": { 00:16:58.198 "read": true, 00:16:58.198 "write": true, 00:16:58.198 "unmap": true, 00:16:58.198 "write_zeroes": true, 00:16:58.198 "flush": true, 00:16:58.198 "reset": true, 00:16:58.198 "compare": false, 00:16:58.198 "compare_and_write": false, 00:16:58.198 "abort": true, 00:16:58.198 "nvme_admin": false, 00:16:58.198 "nvme_io": false 00:16:58.198 }, 00:16:58.198 "memory_domains": [ 00:16:58.198 { 00:16:58.198 "dma_device_id": "system", 00:16:58.198 "dma_device_type": 1 00:16:58.198 }, 00:16:58.198 { 00:16:58.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.198 "dma_device_type": 2 00:16:58.198 } 00:16:58.198 ], 00:16:58.198 "driver_specific": {} 00:16:58.198 } 00:16:58.198 ] 00:16:58.198 13:01:02 -- common/autotest_common.sh@893 -- # return 0 00:16:58.198 13:01:02 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:58.198 13:01:02 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:58.198 13:01:02 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:16:58.198 13:01:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:58.198 13:01:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:58.198 13:01:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:58.198 13:01:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:58.198 13:01:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:58.198 13:01:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:58.198 13:01:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:58.198 13:01:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:58.198 13:01:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:58.198 13:01:02 -- 
bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.198 13:01:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.198 13:01:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:58.198 "name": "Existed_Raid", 00:16:58.198 "uuid": "4b94071a-16f4-4e41-ab73-2c29dae15374", 00:16:58.198 "strip_size_kb": 64, 00:16:58.198 "state": "online", 00:16:58.198 "raid_level": "concat", 00:16:58.198 "superblock": false, 00:16:58.198 "num_base_bdevs": 2, 00:16:58.198 "num_base_bdevs_discovered": 2, 00:16:58.198 "num_base_bdevs_operational": 2, 00:16:58.198 "base_bdevs_list": [ 00:16:58.198 { 00:16:58.198 "name": "BaseBdev1", 00:16:58.198 "uuid": "eff025c5-f387-4892-b8ec-7583c892f18e", 00:16:58.198 "is_configured": true, 00:16:58.198 "data_offset": 0, 00:16:58.198 "data_size": 65536 00:16:58.198 }, 00:16:58.198 { 00:16:58.198 "name": "BaseBdev2", 00:16:58.198 "uuid": "e47807fa-7d1c-4f56-9515-8038ff854a74", 00:16:58.198 "is_configured": true, 00:16:58.198 "data_offset": 0, 00:16:58.198 "data_size": 65536 00:16:58.198 } 00:16:58.198 ] 00:16:58.198 }' 00:16:58.198 13:01:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:58.198 13:01:02 -- common/autotest_common.sh@10 -- # set +x 00:16:59.134 13:01:02 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:59.393 [2024-04-17 13:01:03.297198] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:59.393 [2024-04-17 13:01:03.297234] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:59.393 [2024-04-17 13:01:03.297301] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:59.393 13:01:03 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:59.393 13:01:03 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:16:59.393 13:01:03 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:59.393 13:01:03 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:59.393 13:01:03 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:59.393 13:01:03 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:16:59.393 13:01:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:59.393 13:01:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:59.393 13:01:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:59.393 13:01:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:59.393 13:01:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:16:59.393 13:01:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:59.393 13:01:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:59.393 13:01:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:59.393 13:01:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:59.393 13:01:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:59.393 13:01:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:59.652 13:01:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:59.652 "name": "Existed_Raid", 00:16:59.652 "uuid": "4b94071a-16f4-4e41-ab73-2c29dae15374", 00:16:59.652 "strip_size_kb": 64, 00:16:59.652 "state": "offline", 00:16:59.652 "raid_level": "concat", 00:16:59.652 "superblock": false, 00:16:59.652 "num_base_bdevs": 2, 00:16:59.652 
"num_base_bdevs_discovered": 1, 00:16:59.652 "num_base_bdevs_operational": 1, 00:16:59.652 "base_bdevs_list": [ 00:16:59.652 { 00:16:59.652 "name": null, 00:16:59.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.652 "is_configured": false, 00:16:59.652 "data_offset": 0, 00:16:59.652 "data_size": 65536 00:16:59.652 }, 00:16:59.652 { 00:16:59.652 "name": "BaseBdev2", 00:16:59.652 "uuid": "e47807fa-7d1c-4f56-9515-8038ff854a74", 00:16:59.652 "is_configured": true, 00:16:59.652 "data_offset": 0, 00:16:59.652 "data_size": 65536 00:16:59.652 } 00:16:59.652 ] 00:16:59.652 }' 00:16:59.652 13:01:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:59.652 13:01:03 -- common/autotest_common.sh@10 -- # set +x 00:17:00.588 13:01:04 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:00.588 13:01:04 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:00.588 13:01:04 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:00.588 13:01:04 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:00.588 13:01:04 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:00.588 13:01:04 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:00.588 13:01:04 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:00.846 [2024-04-17 13:01:04.895227] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:00.846 [2024-04-17 13:01:04.895326] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:17:01.104 13:01:04 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:01.104 13:01:04 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:01.104 13:01:04 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:01.104 13:01:04 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:01.391 13:01:05 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:01.391 13:01:05 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:01.391 13:01:05 -- bdev/bdev_raid.sh@287 -- # killprocess 120129 00:17:01.391 13:01:05 -- common/autotest_common.sh@924 -- # '[' -z 120129 ']' 00:17:01.391 13:01:05 -- common/autotest_common.sh@928 -- # kill -0 120129 00:17:01.391 13:01:05 -- common/autotest_common.sh@929 -- # uname 00:17:01.391 13:01:05 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:17:01.391 13:01:05 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 120129 00:17:01.391 killing process with pid 120129 00:17:01.391 13:01:05 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:17:01.391 13:01:05 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:17:01.391 13:01:05 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 120129' 00:17:01.391 13:01:05 -- common/autotest_common.sh@943 -- # kill 120129 00:17:01.391 13:01:05 -- common/autotest_common.sh@948 -- # wait 120129 00:17:01.391 [2024-04-17 13:01:05.306806] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:01.391 [2024-04-17 13:01:05.306929] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:02.343 ************************************ 00:17:02.343 END TEST raid_state_function_test 00:17:02.344 ************************************ 00:17:02.344 13:01:06 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:02.344 00:17:02.344 real 0m11.199s 00:17:02.344 
user 0m19.622s 00:17:02.344 sys 0m1.267s 00:17:02.344 13:01:06 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:17:02.344 13:01:06 -- common/autotest_common.sh@10 -- # set +x 00:17:02.602 13:01:06 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:17:02.602 13:01:06 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:17:02.602 13:01:06 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:17:02.602 13:01:06 -- common/autotest_common.sh@10 -- # set +x 00:17:02.602 ************************************ 00:17:02.602 START TEST raid_state_function_test_sb 00:17:02.602 ************************************ 00:17:02.602 13:01:06 -- common/autotest_common.sh@1099 -- # raid_state_function_test concat 2 true 00:17:02.602 13:01:06 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:17:02.602 13:01:06 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:17:02.602 13:01:06 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:17:02.602 13:01:06 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:02.602 13:01:06 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:17:02.602 13:01:06 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:02.602 13:01:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:02.602 13:01:06 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:02.602 13:01:06 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:02.602 13:01:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:02.602 13:01:06 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:02.602 13:01:06 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:02.602 13:01:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:02.602 13:01:06 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:02.602 13:01:06 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:02.602 13:01:06 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:02.602 13:01:06 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:02.602 13:01:06 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:02.602 13:01:06 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:17:02.602 13:01:06 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:02.602 13:01:06 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:02.602 13:01:06 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:17:02.602 13:01:06 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:17:02.602 13:01:06 -- bdev/bdev_raid.sh@226 -- # raid_pid=120476 00:17:02.602 13:01:06 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 120476' 00:17:02.602 13:01:06 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:02.602 Process raid pid: 120476 00:17:02.602 13:01:06 -- bdev/bdev_raid.sh@228 -- # waitforlisten 120476 /var/tmp/spdk-raid.sock 00:17:02.602 13:01:06 -- common/autotest_common.sh@817 -- # '[' -z 120476 ']' 00:17:02.602 13:01:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:02.602 13:01:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:02.602 13:01:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:02.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
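For reference, the RPC sequence this superblock variant drives can be replayed by hand once the app is up. A minimal sketch, assuming a bdev_svc instance is listening on /var/tmp/spdk-raid.sock as above; the sizes, flags, and bdev names are copied from the commands traced in this log, while the script framing around them is illustrative only:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # two 32 MiB malloc bdevs with 512-byte blocks act as the RAID members
    $rpc -s "$sock" bdev_malloc_create 32 512 -b BaseBdev1
    $rpc -s "$sock" bdev_malloc_create 32 512 -b BaseBdev2
    # concat level (-r), 64 KiB strip size (-z), on-disk superblock enabled (-s)
    $rpc -s "$sock" bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
    # inspect the array the same way the test's verify helper does
    $rpc -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
    $rpc -s "$sock" bdev_raid_delete Existed_Raid

Note that with -s the superblock consumes the head of each member, which is why the JSON in the superblock tests reports data_offset 2048 and data_size 63488 instead of the 0/65536 seen in the non-superblock run above.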
00:17:02.602 13:01:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:02.602 13:01:06 -- common/autotest_common.sh@10 -- # set +x 00:17:02.602 [2024-04-17 13:01:06.618305] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:17:02.602 [2024-04-17 13:01:06.618751] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:02.861 [2024-04-17 13:01:06.786961] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.120 [2024-04-17 13:01:07.041528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.120 [2024-04-17 13:01:07.245884] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:03.688 13:01:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:03.688 13:01:07 -- common/autotest_common.sh@850 -- # return 0 00:17:03.688 13:01:07 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:03.688 [2024-04-17 13:01:07.807734] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:03.688 [2024-04-17 13:01:07.808040] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:03.688 [2024-04-17 13:01:07.808155] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:03.688 [2024-04-17 13:01:07.808215] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:03.688 13:01:07 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:17:03.688 13:01:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:03.688 13:01:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:03.688 13:01:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:03.688 13:01:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:03.688 13:01:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:03.688 13:01:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:03.688 13:01:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:03.688 13:01:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:03.688 13:01:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:03.688 13:01:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.688 13:01:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:03.948 13:01:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:03.948 "name": "Existed_Raid", 00:17:03.948 "uuid": "0712865c-be85-4aef-9466-18584c1d6ef1", 00:17:03.948 "strip_size_kb": 64, 00:17:03.948 "state": "configuring", 00:17:03.948 "raid_level": "concat", 00:17:03.948 "superblock": true, 00:17:03.948 "num_base_bdevs": 2, 00:17:03.948 "num_base_bdevs_discovered": 0, 00:17:03.948 "num_base_bdevs_operational": 2, 00:17:03.948 "base_bdevs_list": [ 00:17:03.948 { 00:17:03.948 "name": "BaseBdev1", 00:17:03.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.948 "is_configured": false, 00:17:03.948 "data_offset": 0, 00:17:03.948 "data_size": 0 00:17:03.948 }, 00:17:03.948 { 00:17:03.948 "name": "BaseBdev2", 00:17:03.948 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:03.948 "is_configured": false, 00:17:03.948 "data_offset": 0, 00:17:03.948 "data_size": 0 00:17:03.948 } 00:17:03.948 ] 00:17:03.948 }' 00:17:03.948 13:01:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:03.948 13:01:08 -- common/autotest_common.sh@10 -- # set +x 00:17:04.885 13:01:08 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:04.885 [2024-04-17 13:01:08.959843] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:04.885 [2024-04-17 13:01:08.959903] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:17:04.885 13:01:08 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:05.143 [2024-04-17 13:01:09.187932] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:05.143 [2024-04-17 13:01:09.188217] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:05.143 [2024-04-17 13:01:09.188329] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:05.144 [2024-04-17 13:01:09.188392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:05.144 13:01:09 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:05.401 [2024-04-17 13:01:09.496536] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:05.401 BaseBdev1 00:17:05.401 13:01:09 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:05.401 13:01:09 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:17:05.401 13:01:09 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:05.401 13:01:09 -- common/autotest_common.sh@887 -- # local i 00:17:05.401 13:01:09 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:05.401 13:01:09 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:05.401 13:01:09 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:05.659 13:01:09 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:05.930 [ 00:17:05.930 { 00:17:05.930 "name": "BaseBdev1", 00:17:05.930 "aliases": [ 00:17:05.930 "87481680-21ee-4bb7-94df-b9ad6ecff9d3" 00:17:05.930 ], 00:17:05.930 "product_name": "Malloc disk", 00:17:05.930 "block_size": 512, 00:17:05.930 "num_blocks": 65536, 00:17:05.930 "uuid": "87481680-21ee-4bb7-94df-b9ad6ecff9d3", 00:17:05.930 "assigned_rate_limits": { 00:17:05.930 "rw_ios_per_sec": 0, 00:17:05.930 "rw_mbytes_per_sec": 0, 00:17:05.930 "r_mbytes_per_sec": 0, 00:17:05.930 "w_mbytes_per_sec": 0 00:17:05.930 }, 00:17:05.930 "claimed": true, 00:17:05.930 "claim_type": "exclusive_write", 00:17:05.930 "zoned": false, 00:17:05.930 "supported_io_types": { 00:17:05.930 "read": true, 00:17:05.930 "write": true, 00:17:05.930 "unmap": true, 00:17:05.930 "write_zeroes": true, 00:17:05.930 "flush": true, 00:17:05.930 "reset": true, 00:17:05.930 "compare": false, 00:17:05.930 "compare_and_write": false, 00:17:05.930 "abort": true, 00:17:05.930 "nvme_admin": false, 00:17:05.930 "nvme_io": 
false 00:17:05.930 }, 00:17:05.930 "memory_domains": [ 00:17:05.930 { 00:17:05.930 "dma_device_id": "system", 00:17:05.930 "dma_device_type": 1 00:17:05.930 }, 00:17:05.930 { 00:17:05.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.930 "dma_device_type": 2 00:17:05.930 } 00:17:05.930 ], 00:17:05.930 "driver_specific": {} 00:17:05.930 } 00:17:05.930 ] 00:17:05.930 13:01:10 -- common/autotest_common.sh@893 -- # return 0 00:17:05.930 13:01:10 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:17:05.930 13:01:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:05.930 13:01:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:05.930 13:01:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:05.930 13:01:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:05.930 13:01:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:05.930 13:01:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:05.930 13:01:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:05.930 13:01:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:05.930 13:01:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:05.930 13:01:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.930 13:01:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:06.217 13:01:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:06.217 "name": "Existed_Raid", 00:17:06.217 "uuid": "ddb123a4-287f-4595-bdc3-203b0ccc0128", 00:17:06.217 "strip_size_kb": 64, 00:17:06.217 "state": "configuring", 00:17:06.217 "raid_level": "concat", 00:17:06.217 "superblock": true, 00:17:06.217 "num_base_bdevs": 2, 00:17:06.217 "num_base_bdevs_discovered": 1, 00:17:06.217 "num_base_bdevs_operational": 2, 00:17:06.217 "base_bdevs_list": [ 00:17:06.217 { 00:17:06.217 "name": "BaseBdev1", 00:17:06.217 "uuid": "87481680-21ee-4bb7-94df-b9ad6ecff9d3", 00:17:06.217 "is_configured": true, 00:17:06.217 "data_offset": 2048, 00:17:06.217 "data_size": 63488 00:17:06.217 }, 00:17:06.217 { 00:17:06.217 "name": "BaseBdev2", 00:17:06.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.217 "is_configured": false, 00:17:06.217 "data_offset": 0, 00:17:06.217 "data_size": 0 00:17:06.217 } 00:17:06.217 ] 00:17:06.217 }' 00:17:06.217 13:01:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:06.217 13:01:10 -- common/autotest_common.sh@10 -- # set +x 00:17:07.152 13:01:11 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:07.411 [2024-04-17 13:01:11.353034] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:07.411 [2024-04-17 13:01:11.353299] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:17:07.411 13:01:11 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:17:07.411 13:01:11 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:07.669 13:01:11 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:07.927 BaseBdev1 00:17:07.927 13:01:11 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:17:07.927 13:01:11 -- common/autotest_common.sh@885 -- # local 
bdev_name=BaseBdev1 00:17:07.927 13:01:11 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:07.927 13:01:11 -- common/autotest_common.sh@887 -- # local i 00:17:07.927 13:01:11 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:07.927 13:01:11 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:07.927 13:01:11 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:08.196 13:01:12 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:08.454 [ 00:17:08.454 { 00:17:08.454 "name": "BaseBdev1", 00:17:08.454 "aliases": [ 00:17:08.454 "75b4678f-8ef5-43ae-ab9e-840a47b7d2a6" 00:17:08.454 ], 00:17:08.454 "product_name": "Malloc disk", 00:17:08.454 "block_size": 512, 00:17:08.454 "num_blocks": 65536, 00:17:08.454 "uuid": "75b4678f-8ef5-43ae-ab9e-840a47b7d2a6", 00:17:08.454 "assigned_rate_limits": { 00:17:08.454 "rw_ios_per_sec": 0, 00:17:08.454 "rw_mbytes_per_sec": 0, 00:17:08.454 "r_mbytes_per_sec": 0, 00:17:08.454 "w_mbytes_per_sec": 0 00:17:08.454 }, 00:17:08.454 "claimed": false, 00:17:08.454 "zoned": false, 00:17:08.454 "supported_io_types": { 00:17:08.454 "read": true, 00:17:08.454 "write": true, 00:17:08.454 "unmap": true, 00:17:08.454 "write_zeroes": true, 00:17:08.454 "flush": true, 00:17:08.454 "reset": true, 00:17:08.454 "compare": false, 00:17:08.454 "compare_and_write": false, 00:17:08.454 "abort": true, 00:17:08.454 "nvme_admin": false, 00:17:08.454 "nvme_io": false 00:17:08.454 }, 00:17:08.454 "memory_domains": [ 00:17:08.454 { 00:17:08.454 "dma_device_id": "system", 00:17:08.454 "dma_device_type": 1 00:17:08.454 }, 00:17:08.454 { 00:17:08.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:08.454 "dma_device_type": 2 00:17:08.454 } 00:17:08.454 ], 00:17:08.454 "driver_specific": {} 00:17:08.454 } 00:17:08.454 ] 00:17:08.455 13:01:12 -- common/autotest_common.sh@893 -- # return 0 00:17:08.455 13:01:12 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:08.723 [2024-04-17 13:01:12.825574] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:08.723 [2024-04-17 13:01:12.827852] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:08.723 [2024-04-17 13:01:12.828046] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:08.723 13:01:12 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:08.723 13:01:12 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:08.723 13:01:12 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:17:08.724 13:01:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:08.724 13:01:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:08.724 13:01:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:08.724 13:01:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:08.724 13:01:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:08.724 13:01:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:08.724 13:01:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:08.724 13:01:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:08.724 13:01:12 -- bdev/bdev_raid.sh@125 -- # 
local tmp 00:17:08.724 13:01:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:08.724 13:01:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.981 13:01:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:08.981 "name": "Existed_Raid", 00:17:08.981 "uuid": "a98c1967-bf36-42b2-87fc-70b48252a6a3", 00:17:08.981 "strip_size_kb": 64, 00:17:08.981 "state": "configuring", 00:17:08.981 "raid_level": "concat", 00:17:08.981 "superblock": true, 00:17:08.981 "num_base_bdevs": 2, 00:17:08.981 "num_base_bdevs_discovered": 1, 00:17:08.981 "num_base_bdevs_operational": 2, 00:17:08.981 "base_bdevs_list": [ 00:17:08.981 { 00:17:08.981 "name": "BaseBdev1", 00:17:08.981 "uuid": "75b4678f-8ef5-43ae-ab9e-840a47b7d2a6", 00:17:08.981 "is_configured": true, 00:17:08.981 "data_offset": 2048, 00:17:08.981 "data_size": 63488 00:17:08.981 }, 00:17:08.981 { 00:17:08.981 "name": "BaseBdev2", 00:17:08.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.981 "is_configured": false, 00:17:08.981 "data_offset": 0, 00:17:08.981 "data_size": 0 00:17:08.981 } 00:17:08.981 ] 00:17:08.981 }' 00:17:08.981 13:01:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:08.981 13:01:13 -- common/autotest_common.sh@10 -- # set +x 00:17:09.935 13:01:13 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:10.193 [2024-04-17 13:01:14.118521] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:10.193 [2024-04-17 13:01:14.119027] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:17:10.193 [2024-04-17 13:01:14.119155] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:10.193 BaseBdev2 00:17:10.193 [2024-04-17 13:01:14.119333] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:17:10.193 [2024-04-17 13:01:14.119869] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:17:10.193 [2024-04-17 13:01:14.119995] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:17:10.193 [2024-04-17 13:01:14.120246] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:10.193 13:01:14 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:10.193 13:01:14 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:17:10.193 13:01:14 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:10.193 13:01:14 -- common/autotest_common.sh@887 -- # local i 00:17:10.193 13:01:14 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:10.193 13:01:14 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:10.193 13:01:14 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:10.452 13:01:14 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:10.452 [ 00:17:10.452 { 00:17:10.452 "name": "BaseBdev2", 00:17:10.452 "aliases": [ 00:17:10.452 "3744bd0b-88f0-4d82-aa6d-264f761b6d3a" 00:17:10.452 ], 00:17:10.452 "product_name": "Malloc disk", 00:17:10.452 "block_size": 512, 00:17:10.452 "num_blocks": 65536, 00:17:10.452 "uuid": "3744bd0b-88f0-4d82-aa6d-264f761b6d3a", 00:17:10.452 
"assigned_rate_limits": { 00:17:10.452 "rw_ios_per_sec": 0, 00:17:10.452 "rw_mbytes_per_sec": 0, 00:17:10.452 "r_mbytes_per_sec": 0, 00:17:10.452 "w_mbytes_per_sec": 0 00:17:10.452 }, 00:17:10.452 "claimed": true, 00:17:10.452 "claim_type": "exclusive_write", 00:17:10.452 "zoned": false, 00:17:10.452 "supported_io_types": { 00:17:10.452 "read": true, 00:17:10.452 "write": true, 00:17:10.452 "unmap": true, 00:17:10.452 "write_zeroes": true, 00:17:10.452 "flush": true, 00:17:10.452 "reset": true, 00:17:10.452 "compare": false, 00:17:10.452 "compare_and_write": false, 00:17:10.452 "abort": true, 00:17:10.452 "nvme_admin": false, 00:17:10.452 "nvme_io": false 00:17:10.452 }, 00:17:10.452 "memory_domains": [ 00:17:10.452 { 00:17:10.452 "dma_device_id": "system", 00:17:10.452 "dma_device_type": 1 00:17:10.452 }, 00:17:10.452 { 00:17:10.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.452 "dma_device_type": 2 00:17:10.452 } 00:17:10.452 ], 00:17:10.452 "driver_specific": {} 00:17:10.452 } 00:17:10.452 ] 00:17:10.452 13:01:14 -- common/autotest_common.sh@893 -- # return 0 00:17:10.452 13:01:14 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:10.452 13:01:14 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:10.452 13:01:14 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:17:10.452 13:01:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:10.452 13:01:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:10.452 13:01:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:10.452 13:01:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:10.452 13:01:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:10.452 13:01:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:10.452 13:01:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:10.452 13:01:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:10.452 13:01:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:10.452 13:01:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.452 13:01:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:11.019 13:01:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:11.019 "name": "Existed_Raid", 00:17:11.019 "uuid": "a98c1967-bf36-42b2-87fc-70b48252a6a3", 00:17:11.019 "strip_size_kb": 64, 00:17:11.019 "state": "online", 00:17:11.019 "raid_level": "concat", 00:17:11.019 "superblock": true, 00:17:11.019 "num_base_bdevs": 2, 00:17:11.019 "num_base_bdevs_discovered": 2, 00:17:11.019 "num_base_bdevs_operational": 2, 00:17:11.019 "base_bdevs_list": [ 00:17:11.019 { 00:17:11.019 "name": "BaseBdev1", 00:17:11.019 "uuid": "75b4678f-8ef5-43ae-ab9e-840a47b7d2a6", 00:17:11.019 "is_configured": true, 00:17:11.019 "data_offset": 2048, 00:17:11.019 "data_size": 63488 00:17:11.019 }, 00:17:11.019 { 00:17:11.019 "name": "BaseBdev2", 00:17:11.019 "uuid": "3744bd0b-88f0-4d82-aa6d-264f761b6d3a", 00:17:11.019 "is_configured": true, 00:17:11.019 "data_offset": 2048, 00:17:11.019 "data_size": 63488 00:17:11.019 } 00:17:11.019 ] 00:17:11.019 }' 00:17:11.019 13:01:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:11.019 13:01:14 -- common/autotest_common.sh@10 -- # set +x 00:17:11.586 13:01:15 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:11.845 [2024-04-17 13:01:15.811092] 
bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:11.845 [2024-04-17 13:01:15.811389] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:11.845 [2024-04-17 13:01:15.811599] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:11.845 13:01:15 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:11.845 13:01:15 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:17:11.845 13:01:15 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:11.845 13:01:15 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:11.845 13:01:15 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:11.845 13:01:15 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:17:11.845 13:01:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:11.845 13:01:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:11.845 13:01:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:11.845 13:01:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:11.845 13:01:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:17:11.845 13:01:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:11.845 13:01:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:11.845 13:01:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:11.845 13:01:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:11.845 13:01:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.845 13:01:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:12.104 13:01:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:12.104 "name": "Existed_Raid", 00:17:12.104 "uuid": "a98c1967-bf36-42b2-87fc-70b48252a6a3", 00:17:12.104 "strip_size_kb": 64, 00:17:12.104 "state": "offline", 00:17:12.104 "raid_level": "concat", 00:17:12.104 "superblock": true, 00:17:12.104 "num_base_bdevs": 2, 00:17:12.104 "num_base_bdevs_discovered": 1, 00:17:12.104 "num_base_bdevs_operational": 1, 00:17:12.104 "base_bdevs_list": [ 00:17:12.104 { 00:17:12.104 "name": null, 00:17:12.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.104 "is_configured": false, 00:17:12.104 "data_offset": 2048, 00:17:12.104 "data_size": 63488 00:17:12.104 }, 00:17:12.104 { 00:17:12.104 "name": "BaseBdev2", 00:17:12.104 "uuid": "3744bd0b-88f0-4d82-aa6d-264f761b6d3a", 00:17:12.104 "is_configured": true, 00:17:12.104 "data_offset": 2048, 00:17:12.104 "data_size": 63488 00:17:12.104 } 00:17:12.104 ] 00:17:12.104 }' 00:17:12.104 13:01:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:12.104 13:01:16 -- common/autotest_common.sh@10 -- # set +x 00:17:13.045 13:01:16 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:13.045 13:01:16 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:13.045 13:01:16 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.045 13:01:16 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:13.045 13:01:17 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:13.045 13:01:17 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:13.045 13:01:17 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:13.304 [2024-04-17 13:01:17.359330] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: 
*DEBUG*: BaseBdev2 00:17:13.304 [2024-04-17 13:01:17.359635] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:17:13.562 13:01:17 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:13.562 13:01:17 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:13.562 13:01:17 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.562 13:01:17 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:13.821 13:01:17 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:13.821 13:01:17 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:13.821 13:01:17 -- bdev/bdev_raid.sh@287 -- # killprocess 120476 00:17:13.821 13:01:17 -- common/autotest_common.sh@924 -- # '[' -z 120476 ']' 00:17:13.821 13:01:17 -- common/autotest_common.sh@928 -- # kill -0 120476 00:17:13.821 13:01:17 -- common/autotest_common.sh@929 -- # uname 00:17:13.821 13:01:17 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:17:13.821 13:01:17 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 120476 00:17:13.821 killing process with pid 120476 00:17:13.821 13:01:17 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:17:13.821 13:01:17 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:17:13.822 13:01:17 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 120476' 00:17:13.822 13:01:17 -- common/autotest_common.sh@943 -- # kill 120476 00:17:13.822 13:01:17 -- common/autotest_common.sh@948 -- # wait 120476 00:17:13.822 [2024-04-17 13:01:17.732731] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:13.822 [2024-04-17 13:01:17.732845] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:14.757 ************************************ 00:17:14.757 END TEST raid_state_function_test_sb 00:17:14.757 ************************************ 00:17:14.757 13:01:18 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:14.757 00:17:14.757 real 0m12.282s 00:17:14.757 user 0m21.623s 00:17:14.757 sys 0m1.341s 00:17:14.757 13:01:18 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:17:14.757 13:01:18 -- common/autotest_common.sh@10 -- # set +x 00:17:14.757 13:01:18 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:17:14.757 13:01:18 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:17:14.757 13:01:18 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:17:14.757 13:01:18 -- common/autotest_common.sh@10 -- # set +x 00:17:15.016 ************************************ 00:17:15.016 START TEST raid_superblock_test 00:17:15.016 ************************************ 00:17:15.016 13:01:18 -- common/autotest_common.sh@1099 -- # raid_superblock_test concat 2 00:17:15.016 13:01:18 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:17:15.016 13:01:18 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:17:15.016 13:01:18 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:17:15.016 13:01:18 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:17:15.016 13:01:18 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:17:15.016 13:01:18 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:17:15.016 13:01:18 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:17:15.016 13:01:18 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:17:15.016 13:01:18 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:17:15.016 13:01:18 -- bdev/bdev_raid.sh@344 -- # local 
strip_size 00:17:15.016 13:01:18 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:17:15.016 13:01:18 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:17:15.016 13:01:18 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:17:15.016 13:01:18 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:17:15.016 13:01:18 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:17:15.016 13:01:18 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:17:15.016 13:01:18 -- bdev/bdev_raid.sh@357 -- # raid_pid=120839 00:17:15.016 13:01:18 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:15.016 13:01:18 -- bdev/bdev_raid.sh@358 -- # waitforlisten 120839 /var/tmp/spdk-raid.sock 00:17:15.016 13:01:18 -- common/autotest_common.sh@817 -- # '[' -z 120839 ']' 00:17:15.016 13:01:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:15.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:15.016 13:01:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:15.016 13:01:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:15.016 13:01:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:15.016 13:01:18 -- common/autotest_common.sh@10 -- # set +x 00:17:15.016 [2024-04-17 13:01:18.978008] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:17:15.016 [2024-04-17 13:01:18.978212] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120839 ] 00:17:15.016 [2024-04-17 13:01:19.145681] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.275 [2024-04-17 13:01:19.382962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.534 [2024-04-17 13:01:19.582858] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:16.101 13:01:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:16.101 13:01:19 -- common/autotest_common.sh@850 -- # return 0 00:17:16.101 13:01:19 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:17:16.101 13:01:19 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:16.101 13:01:19 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:17:16.101 13:01:19 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:17:16.101 13:01:19 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:16.101 13:01:19 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:16.101 13:01:19 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:16.101 13:01:19 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:16.101 13:01:19 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:16.360 malloc1 00:17:16.360 13:01:20 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:16.619 [2024-04-17 13:01:20.513150] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:16.619 [2024-04-17 13:01:20.513246] vbdev_passthru.c: 636:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:17:16.619 [2024-04-17 13:01:20.513282] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:17:16.619 [2024-04-17 13:01:20.513333] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.619 [2024-04-17 13:01:20.515958] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.619 [2024-04-17 13:01:20.516012] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:16.619 pt1 00:17:16.619 13:01:20 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:16.619 13:01:20 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:16.619 13:01:20 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:17:16.619 13:01:20 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:17:16.619 13:01:20 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:16.619 13:01:20 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:16.619 13:01:20 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:16.619 13:01:20 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:16.619 13:01:20 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:16.877 malloc2 00:17:16.877 13:01:20 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:16.877 [2024-04-17 13:01:21.015628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:16.877 [2024-04-17 13:01:21.015747] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.877 [2024-04-17 13:01:21.015792] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:17:16.877 [2024-04-17 13:01:21.015865] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.877 [2024-04-17 13:01:21.018189] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.877 [2024-04-17 13:01:21.018239] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:16.877 pt2 00:17:17.136 13:01:21 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:17.136 13:01:21 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:17.136 13:01:21 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:17:17.395 [2024-04-17 13:01:21.347816] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:17.395 [2024-04-17 13:01:21.350068] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:17.395 [2024-04-17 13:01:21.350305] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:17:17.395 [2024-04-17 13:01:21.350321] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:17.395 [2024-04-17 13:01:21.350498] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:17:17.395 [2024-04-17 13:01:21.350899] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:17:17.395 [2024-04-17 13:01:21.350925] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:17:17.395 
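The verify_raid_bdev_state helper traced at bdev_raid.sh lines 117-129 only exposes its locals and the bdev_raid_get_bdevs/jq fetch in this output; the field comparisons themselves are hidden behind the xtrace_disable at line 129. A hypothetical reconstruction of the check it performs for the call that follows (verify_raid_bdev_state raid_bdev1 online concat 64 2), built from the locals and the jq filter visible in the trace — the individual equality tests and the omitted discovered-count checks are assumptions, not visible in this log:

    verify_raid_bdev_state() {
        # positional args mirror the traced call: name, state, level, strip size, operational members
        local raid_bdev_name=$1 expected_state=$2 raid_level=$3 strip_size=$4 num_base_bdevs_operational=$5
        local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
        local raid_bdev_info
        # pull the one raid bdev we care about out of the full listing
        raid_bdev_info=$($rpc -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
                         jq -r ".[] | select(.name == \"$raid_bdev_name\")")
        # compare each reported field against the caller's expectations
        [ "$(jq -r '.state' <<< "$raid_bdev_info")" = "$expected_state" ] || return 1
        [ "$(jq -r '.raid_level' <<< "$raid_bdev_info")" = "$raid_level" ] || return 1
        [ "$(jq -r '.strip_size_kb' <<< "$raid_bdev_info")" -eq "$strip_size" ] || return 1
        [ "$(jq -r '.num_base_bdevs_operational' <<< "$raid_bdev_info")" -eq "$num_base_bdevs_operational" ] || return 1
    }
    # usage as traced below: verify_raid_bdev_state raid_bdev1 online concat 64 2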
[2024-04-17 13:01:21.351082] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:17.395 13:01:21 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:17:17.395 13:01:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:17.395 13:01:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:17.395 13:01:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:17.395 13:01:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:17.395 13:01:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:17.395 13:01:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:17.395 13:01:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:17.395 13:01:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:17.395 13:01:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:17.395 13:01:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.395 13:01:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.654 13:01:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:17.654 "name": "raid_bdev1", 00:17:17.654 "uuid": "edf90c4d-e4eb-47fc-af8f-31dd5471c8da", 00:17:17.654 "strip_size_kb": 64, 00:17:17.654 "state": "online", 00:17:17.654 "raid_level": "concat", 00:17:17.654 "superblock": true, 00:17:17.654 "num_base_bdevs": 2, 00:17:17.654 "num_base_bdevs_discovered": 2, 00:17:17.654 "num_base_bdevs_operational": 2, 00:17:17.654 "base_bdevs_list": [ 00:17:17.654 { 00:17:17.654 "name": "pt1", 00:17:17.654 "uuid": "6f8ac656-9b62-55aa-85db-26d475fb6db7", 00:17:17.654 "is_configured": true, 00:17:17.654 "data_offset": 2048, 00:17:17.654 "data_size": 63488 00:17:17.654 }, 00:17:17.654 { 00:17:17.654 "name": "pt2", 00:17:17.654 "uuid": "a424f662-d398-5f81-8048-04c6d083fe52", 00:17:17.654 "is_configured": true, 00:17:17.654 "data_offset": 2048, 00:17:17.654 "data_size": 63488 00:17:17.654 } 00:17:17.654 ] 00:17:17.654 }' 00:17:17.654 13:01:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:17.654 13:01:21 -- common/autotest_common.sh@10 -- # set +x 00:17:18.219 13:01:22 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:18.219 13:01:22 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:17:18.477 [2024-04-17 13:01:22.608346] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:18.736 13:01:22 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=edf90c4d-e4eb-47fc-af8f-31dd5471c8da 00:17:18.736 13:01:22 -- bdev/bdev_raid.sh@380 -- # '[' -z edf90c4d-e4eb-47fc-af8f-31dd5471c8da ']' 00:17:18.736 13:01:22 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:18.736 [2024-04-17 13:01:22.844111] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:18.736 [2024-04-17 13:01:22.844161] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:18.736 [2024-04-17 13:01:22.844255] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:18.736 [2024-04-17 13:01:22.844321] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:18.736 [2024-04-17 13:01:22.844335] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name 
raid_bdev1, state offline 00:17:18.736 13:01:22 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:17:18.736 13:01:22 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:18.995 13:01:23 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:17:18.995 13:01:23 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:17:18.995 13:01:23 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:18.995 13:01:23 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:19.253 13:01:23 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:19.253 13:01:23 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:19.522 13:01:23 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:19.522 13:01:23 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:19.794 13:01:23 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:17:19.794 13:01:23 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:17:19.794 13:01:23 -- common/autotest_common.sh@638 -- # local es=0 00:17:19.794 13:01:23 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:17:19.794 13:01:23 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:19.794 13:01:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:19.794 13:01:23 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:19.794 13:01:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:19.794 13:01:23 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:19.794 13:01:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:19.794 13:01:23 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:19.794 13:01:23 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:19.794 13:01:23 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:17:20.053 [2024-04-17 13:01:24.104834] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:20.053 [2024-04-17 13:01:24.107049] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:20.053 [2024-04-17 13:01:24.107155] bdev_raid.c:2995:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:17:20.053 [2024-04-17 13:01:24.107256] bdev_raid.c:2995:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:17:20.053 [2024-04-17 13:01:24.107298] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:20.053 [2024-04-17 13:01:24.107311] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state configuring 00:17:20.053 request: 00:17:20.053 { 00:17:20.053 "name": 
"raid_bdev1", 00:17:20.053 "raid_level": "concat", 00:17:20.053 "base_bdevs": [ 00:17:20.053 "malloc1", 00:17:20.053 "malloc2" 00:17:20.053 ], 00:17:20.053 "superblock": false, 00:17:20.053 "strip_size_kb": 64, 00:17:20.053 "method": "bdev_raid_create", 00:17:20.053 "req_id": 1 00:17:20.053 } 00:17:20.053 Got JSON-RPC error response 00:17:20.053 response: 00:17:20.053 { 00:17:20.053 "code": -17, 00:17:20.053 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:20.053 } 00:17:20.053 13:01:24 -- common/autotest_common.sh@641 -- # es=1 00:17:20.053 13:01:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:20.053 13:01:24 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:20.053 13:01:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:20.053 13:01:24 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:20.053 13:01:24 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:17:20.312 13:01:24 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:17:20.312 13:01:24 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:17:20.312 13:01:24 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:20.880 [2024-04-17 13:01:24.716914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:20.880 [2024-04-17 13:01:24.717038] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.880 [2024-04-17 13:01:24.717078] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:20.880 [2024-04-17 13:01:24.717106] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.880 [2024-04-17 13:01:24.719663] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.880 [2024-04-17 13:01:24.719725] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:20.880 [2024-04-17 13:01:24.719847] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:20.880 [2024-04-17 13:01:24.719927] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:20.880 pt1 00:17:20.880 13:01:24 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:17:20.880 13:01:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:20.880 13:01:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:20.880 13:01:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:20.880 13:01:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:20.880 13:01:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:20.880 13:01:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:20.880 13:01:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:20.880 13:01:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:20.880 13:01:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:20.880 13:01:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:20.880 13:01:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.880 13:01:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:20.880 "name": "raid_bdev1", 00:17:20.880 "uuid": "edf90c4d-e4eb-47fc-af8f-31dd5471c8da", 00:17:20.880 
"strip_size_kb": 64, 00:17:20.880 "state": "configuring", 00:17:20.880 "raid_level": "concat", 00:17:20.880 "superblock": true, 00:17:20.880 "num_base_bdevs": 2, 00:17:20.880 "num_base_bdevs_discovered": 1, 00:17:20.880 "num_base_bdevs_operational": 2, 00:17:20.880 "base_bdevs_list": [ 00:17:20.880 { 00:17:20.880 "name": "pt1", 00:17:20.880 "uuid": "6f8ac656-9b62-55aa-85db-26d475fb6db7", 00:17:20.880 "is_configured": true, 00:17:20.880 "data_offset": 2048, 00:17:20.880 "data_size": 63488 00:17:20.880 }, 00:17:20.880 { 00:17:20.880 "name": null, 00:17:20.880 "uuid": "a424f662-d398-5f81-8048-04c6d083fe52", 00:17:20.880 "is_configured": false, 00:17:20.880 "data_offset": 2048, 00:17:20.880 "data_size": 63488 00:17:20.880 } 00:17:20.880 ] 00:17:20.880 }' 00:17:20.880 13:01:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:20.880 13:01:24 -- common/autotest_common.sh@10 -- # set +x 00:17:21.814 13:01:25 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:17:21.814 13:01:25 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:17:21.814 13:01:25 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:21.814 13:01:25 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:21.814 [2024-04-17 13:01:25.937286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:21.814 [2024-04-17 13:01:25.937414] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:21.814 [2024-04-17 13:01:25.937462] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:21.814 [2024-04-17 13:01:25.937490] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:21.814 [2024-04-17 13:01:25.937995] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:21.814 [2024-04-17 13:01:25.938047] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:21.814 [2024-04-17 13:01:25.938151] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:21.814 [2024-04-17 13:01:25.938180] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:21.814 [2024-04-17 13:01:25.938304] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:17:21.814 [2024-04-17 13:01:25.938319] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:21.814 [2024-04-17 13:01:25.938452] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:17:21.814 [2024-04-17 13:01:25.938834] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:17:21.814 [2024-04-17 13:01:25.938861] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:17:21.814 [2024-04-17 13:01:25.939001] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:21.814 pt2 00:17:21.814 13:01:25 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:21.814 13:01:25 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:21.814 13:01:25 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:17:21.814 13:01:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:21.814 13:01:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:21.814 13:01:25 -- bdev/bdev_raid.sh@119 -- # local 
raid_level=concat 00:17:21.814 13:01:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:21.814 13:01:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:21.814 13:01:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:21.814 13:01:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:21.814 13:01:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:21.814 13:01:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:21.814 13:01:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:21.814 13:01:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.381 13:01:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:22.381 "name": "raid_bdev1", 00:17:22.381 "uuid": "edf90c4d-e4eb-47fc-af8f-31dd5471c8da", 00:17:22.381 "strip_size_kb": 64, 00:17:22.381 "state": "online", 00:17:22.381 "raid_level": "concat", 00:17:22.381 "superblock": true, 00:17:22.381 "num_base_bdevs": 2, 00:17:22.381 "num_base_bdevs_discovered": 2, 00:17:22.381 "num_base_bdevs_operational": 2, 00:17:22.381 "base_bdevs_list": [ 00:17:22.381 { 00:17:22.381 "name": "pt1", 00:17:22.381 "uuid": "6f8ac656-9b62-55aa-85db-26d475fb6db7", 00:17:22.381 "is_configured": true, 00:17:22.381 "data_offset": 2048, 00:17:22.381 "data_size": 63488 00:17:22.381 }, 00:17:22.381 { 00:17:22.381 "name": "pt2", 00:17:22.381 "uuid": "a424f662-d398-5f81-8048-04c6d083fe52", 00:17:22.381 "is_configured": true, 00:17:22.381 "data_offset": 2048, 00:17:22.381 "data_size": 63488 00:17:22.381 } 00:17:22.381 ] 00:17:22.381 }' 00:17:22.381 13:01:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:22.381 13:01:26 -- common/autotest_common.sh@10 -- # set +x 00:17:22.972 13:01:26 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:17:22.972 13:01:26 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:23.231 [2024-04-17 13:01:27.165788] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:23.231 13:01:27 -- bdev/bdev_raid.sh@430 -- # '[' edf90c4d-e4eb-47fc-af8f-31dd5471c8da '!=' edf90c4d-e4eb-47fc-af8f-31dd5471c8da ']' 00:17:23.231 13:01:27 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:17:23.231 13:01:27 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:23.231 13:01:27 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:23.231 13:01:27 -- bdev/bdev_raid.sh@511 -- # killprocess 120839 00:17:23.231 13:01:27 -- common/autotest_common.sh@924 -- # '[' -z 120839 ']' 00:17:23.231 13:01:27 -- common/autotest_common.sh@928 -- # kill -0 120839 00:17:23.231 13:01:27 -- common/autotest_common.sh@929 -- # uname 00:17:23.231 13:01:27 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:17:23.231 13:01:27 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 120839 00:17:23.231 killing process with pid 120839 00:17:23.231 13:01:27 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:17:23.231 13:01:27 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:17:23.231 13:01:27 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 120839' 00:17:23.231 13:01:27 -- common/autotest_common.sh@943 -- # kill 120839 00:17:23.231 13:01:27 -- common/autotest_common.sh@948 -- # wait 120839 00:17:23.231 [2024-04-17 13:01:27.204280] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:23.231 [2024-04-17 13:01:27.204368] bdev_raid.c: 
449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:23.231 [2024-04-17 13:01:27.204423] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:23.231 [2024-04-17 13:01:27.204434] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:17:23.231 [2024-04-17 13:01:27.366314] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:24.604 ************************************ 00:17:24.604 END TEST raid_superblock_test 00:17:24.604 ************************************ 00:17:24.604 13:01:28 -- bdev/bdev_raid.sh@513 -- # return 0 00:17:24.604 00:17:24.604 real 0m9.565s 00:17:24.604 user 0m16.491s 00:17:24.604 sys 0m1.121s 00:17:24.604 13:01:28 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:17:24.604 13:01:28 -- common/autotest_common.sh@10 -- # set +x 00:17:24.604 13:01:28 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:17:24.604 13:01:28 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:17:24.604 13:01:28 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:17:24.604 13:01:28 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:17:24.604 13:01:28 -- common/autotest_common.sh@10 -- # set +x 00:17:24.604 ************************************ 00:17:24.604 START TEST raid_state_function_test 00:17:24.604 ************************************ 00:17:24.604 13:01:28 -- common/autotest_common.sh@1099 -- # raid_state_function_test raid1 2 false 00:17:24.605 13:01:28 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:17:24.605 13:01:28 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:17:24.605 13:01:28 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:17:24.605 13:01:28 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:24.605 13:01:28 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:17:24.605 13:01:28 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:24.605 13:01:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:24.605 13:01:28 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:24.605 13:01:28 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:24.605 13:01:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:24.605 13:01:28 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:24.605 13:01:28 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:24.605 13:01:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:24.605 13:01:28 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:24.605 13:01:28 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:24.605 13:01:28 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:24.605 13:01:28 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:24.605 13:01:28 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:24.605 13:01:28 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:17:24.605 13:01:28 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:17:24.605 13:01:28 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:17:24.605 13:01:28 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:17:24.605 13:01:28 -- bdev/bdev_raid.sh@226 -- # raid_pid=121116 00:17:24.605 Process raid pid: 121116 00:17:24.605 13:01:28 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 121116' 00:17:24.605 13:01:28 -- bdev/bdev_raid.sh@228 -- # waitforlisten 121116 /var/tmp/spdk-raid.sock 00:17:24.605 13:01:28 -- 
bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:24.605 13:01:28 -- common/autotest_common.sh@817 -- # '[' -z 121116 ']' 00:17:24.605 13:01:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:24.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:24.605 13:01:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:24.605 13:01:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:24.605 13:01:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:24.605 13:01:28 -- common/autotest_common.sh@10 -- # set +x 00:17:24.605 [2024-04-17 13:01:28.627929] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:17:24.605 [2024-04-17 13:01:28.628253] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:24.863 [2024-04-17 13:01:28.800627] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.122 [2024-04-17 13:01:29.044155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.122 [2024-04-17 13:01:29.245953] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:25.688 13:01:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:25.688 13:01:29 -- common/autotest_common.sh@850 -- # return 0 00:17:25.688 13:01:29 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:25.688 [2024-04-17 13:01:29.819644] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:25.688 [2024-04-17 13:01:29.819750] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:25.688 [2024-04-17 13:01:29.819766] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:25.688 [2024-04-17 13:01:29.819785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:25.946 13:01:29 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:25.946 13:01:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:25.946 13:01:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:25.946 13:01:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:25.946 13:01:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:25.946 13:01:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:25.946 13:01:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:25.946 13:01:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:25.946 13:01:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:25.946 13:01:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:25.946 13:01:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.946 13:01:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.204 13:01:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:26.204 "name": "Existed_Raid", 00:17:26.204 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:26.204 "strip_size_kb": 0, 00:17:26.204 "state": "configuring", 00:17:26.204 "raid_level": "raid1", 00:17:26.204 "superblock": false, 00:17:26.204 "num_base_bdevs": 2, 00:17:26.204 "num_base_bdevs_discovered": 0, 00:17:26.204 "num_base_bdevs_operational": 2, 00:17:26.204 "base_bdevs_list": [ 00:17:26.204 { 00:17:26.204 "name": "BaseBdev1", 00:17:26.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.204 "is_configured": false, 00:17:26.204 "data_offset": 0, 00:17:26.204 "data_size": 0 00:17:26.204 }, 00:17:26.204 { 00:17:26.204 "name": "BaseBdev2", 00:17:26.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.204 "is_configured": false, 00:17:26.204 "data_offset": 0, 00:17:26.204 "data_size": 0 00:17:26.204 } 00:17:26.204 ] 00:17:26.204 }' 00:17:26.204 13:01:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:26.204 13:01:30 -- common/autotest_common.sh@10 -- # set +x 00:17:26.771 13:01:30 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:27.029 [2024-04-17 13:01:31.035813] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:27.029 [2024-04-17 13:01:31.035896] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:17:27.029 13:01:31 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:27.289 [2024-04-17 13:01:31.339926] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:27.289 [2024-04-17 13:01:31.340059] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:27.289 [2024-04-17 13:01:31.340081] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:27.289 [2024-04-17 13:01:31.340106] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:27.289 13:01:31 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:27.547 [2024-04-17 13:01:31.608196] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:27.548 BaseBdev1 00:17:27.548 13:01:31 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:27.548 13:01:31 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:17:27.548 13:01:31 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:27.548 13:01:31 -- common/autotest_common.sh@887 -- # local i 00:17:27.548 13:01:31 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:27.548 13:01:31 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:27.548 13:01:31 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:27.806 13:01:31 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:28.065 [ 00:17:28.065 { 00:17:28.065 "name": "BaseBdev1", 00:17:28.065 "aliases": [ 00:17:28.065 "2593ced4-bb21-455f-bbea-fa9f2829de30" 00:17:28.065 ], 00:17:28.065 "product_name": "Malloc disk", 00:17:28.065 "block_size": 512, 00:17:28.065 "num_blocks": 65536, 00:17:28.065 "uuid": "2593ced4-bb21-455f-bbea-fa9f2829de30", 00:17:28.065 "assigned_rate_limits": { 
00:17:28.065 "rw_ios_per_sec": 0, 00:17:28.065 "rw_mbytes_per_sec": 0, 00:17:28.065 "r_mbytes_per_sec": 0, 00:17:28.065 "w_mbytes_per_sec": 0 00:17:28.065 }, 00:17:28.065 "claimed": true, 00:17:28.065 "claim_type": "exclusive_write", 00:17:28.065 "zoned": false, 00:17:28.065 "supported_io_types": { 00:17:28.065 "read": true, 00:17:28.065 "write": true, 00:17:28.065 "unmap": true, 00:17:28.065 "write_zeroes": true, 00:17:28.065 "flush": true, 00:17:28.065 "reset": true, 00:17:28.065 "compare": false, 00:17:28.065 "compare_and_write": false, 00:17:28.065 "abort": true, 00:17:28.065 "nvme_admin": false, 00:17:28.065 "nvme_io": false 00:17:28.065 }, 00:17:28.065 "memory_domains": [ 00:17:28.065 { 00:17:28.065 "dma_device_id": "system", 00:17:28.065 "dma_device_type": 1 00:17:28.065 }, 00:17:28.065 { 00:17:28.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.065 "dma_device_type": 2 00:17:28.065 } 00:17:28.065 ], 00:17:28.065 "driver_specific": {} 00:17:28.065 } 00:17:28.065 ] 00:17:28.065 13:01:32 -- common/autotest_common.sh@893 -- # return 0 00:17:28.065 13:01:32 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:28.065 13:01:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:28.065 13:01:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:28.065 13:01:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:28.065 13:01:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:28.065 13:01:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:28.065 13:01:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:28.065 13:01:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:28.065 13:01:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:28.065 13:01:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:28.065 13:01:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.065 13:01:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.324 13:01:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:28.324 "name": "Existed_Raid", 00:17:28.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.324 "strip_size_kb": 0, 00:17:28.324 "state": "configuring", 00:17:28.324 "raid_level": "raid1", 00:17:28.324 "superblock": false, 00:17:28.324 "num_base_bdevs": 2, 00:17:28.324 "num_base_bdevs_discovered": 1, 00:17:28.324 "num_base_bdevs_operational": 2, 00:17:28.324 "base_bdevs_list": [ 00:17:28.324 { 00:17:28.324 "name": "BaseBdev1", 00:17:28.324 "uuid": "2593ced4-bb21-455f-bbea-fa9f2829de30", 00:17:28.324 "is_configured": true, 00:17:28.324 "data_offset": 0, 00:17:28.324 "data_size": 65536 00:17:28.324 }, 00:17:28.324 { 00:17:28.324 "name": "BaseBdev2", 00:17:28.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.324 "is_configured": false, 00:17:28.324 "data_offset": 0, 00:17:28.324 "data_size": 0 00:17:28.324 } 00:17:28.324 ] 00:17:28.324 }' 00:17:28.324 13:01:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:28.324 13:01:32 -- common/autotest_common.sh@10 -- # set +x 00:17:28.897 13:01:33 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:29.157 [2024-04-17 13:01:33.268709] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:29.157 [2024-04-17 13:01:33.268788] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:17:29.157 13:01:33 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:17:29.157 13:01:33 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:29.416 [2024-04-17 13:01:33.492801] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:29.416 [2024-04-17 13:01:33.494995] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:29.416 [2024-04-17 13:01:33.495063] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:29.416 13:01:33 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:29.416 13:01:33 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:29.416 13:01:33 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:29.416 13:01:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:29.416 13:01:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:29.416 13:01:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:29.416 13:01:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:29.416 13:01:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:29.416 13:01:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:29.416 13:01:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:29.416 13:01:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:29.416 13:01:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:29.416 13:01:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:29.416 13:01:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:29.676 13:01:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:29.676 "name": "Existed_Raid", 00:17:29.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.676 "strip_size_kb": 0, 00:17:29.676 "state": "configuring", 00:17:29.676 "raid_level": "raid1", 00:17:29.676 "superblock": false, 00:17:29.676 "num_base_bdevs": 2, 00:17:29.676 "num_base_bdevs_discovered": 1, 00:17:29.676 "num_base_bdevs_operational": 2, 00:17:29.676 "base_bdevs_list": [ 00:17:29.676 { 00:17:29.676 "name": "BaseBdev1", 00:17:29.676 "uuid": "2593ced4-bb21-455f-bbea-fa9f2829de30", 00:17:29.676 "is_configured": true, 00:17:29.676 "data_offset": 0, 00:17:29.676 "data_size": 65536 00:17:29.676 }, 00:17:29.676 { 00:17:29.676 "name": "BaseBdev2", 00:17:29.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.676 "is_configured": false, 00:17:29.676 "data_offset": 0, 00:17:29.676 "data_size": 0 00:17:29.676 } 00:17:29.676 ] 00:17:29.676 }' 00:17:29.676 13:01:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:29.676 13:01:33 -- common/autotest_common.sh@10 -- # set +x 00:17:30.615 13:01:34 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:30.615 [2024-04-17 13:01:34.724799] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:30.615 [2024-04-17 13:01:34.724868] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:17:30.615 [2024-04-17 13:01:34.724880] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:30.615 [2024-04-17 
13:01:34.725040] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:17:30.615 [2024-04-17 13:01:34.725408] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:17:30.615 [2024-04-17 13:01:34.725433] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:17:30.615 [2024-04-17 13:01:34.725710] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:30.615 BaseBdev2 00:17:30.615 13:01:34 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:30.615 13:01:34 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:17:30.615 13:01:34 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:30.615 13:01:34 -- common/autotest_common.sh@887 -- # local i 00:17:30.615 13:01:34 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:30.615 13:01:34 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:30.615 13:01:34 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:30.874 13:01:35 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:31.133 [ 00:17:31.133 { 00:17:31.133 "name": "BaseBdev2", 00:17:31.134 "aliases": [ 00:17:31.134 "dc4ff6fc-9924-4bd8-ad40-a55ae36aac53" 00:17:31.134 ], 00:17:31.134 "product_name": "Malloc disk", 00:17:31.134 "block_size": 512, 00:17:31.134 "num_blocks": 65536, 00:17:31.134 "uuid": "dc4ff6fc-9924-4bd8-ad40-a55ae36aac53", 00:17:31.134 "assigned_rate_limits": { 00:17:31.134 "rw_ios_per_sec": 0, 00:17:31.134 "rw_mbytes_per_sec": 0, 00:17:31.134 "r_mbytes_per_sec": 0, 00:17:31.134 "w_mbytes_per_sec": 0 00:17:31.134 }, 00:17:31.134 "claimed": true, 00:17:31.134 "claim_type": "exclusive_write", 00:17:31.134 "zoned": false, 00:17:31.134 "supported_io_types": { 00:17:31.134 "read": true, 00:17:31.134 "write": true, 00:17:31.134 "unmap": true, 00:17:31.134 "write_zeroes": true, 00:17:31.134 "flush": true, 00:17:31.134 "reset": true, 00:17:31.134 "compare": false, 00:17:31.134 "compare_and_write": false, 00:17:31.134 "abort": true, 00:17:31.134 "nvme_admin": false, 00:17:31.134 "nvme_io": false 00:17:31.134 }, 00:17:31.134 "memory_domains": [ 00:17:31.134 { 00:17:31.134 "dma_device_id": "system", 00:17:31.134 "dma_device_type": 1 00:17:31.134 }, 00:17:31.134 { 00:17:31.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.134 "dma_device_type": 2 00:17:31.134 } 00:17:31.134 ], 00:17:31.134 "driver_specific": {} 00:17:31.134 } 00:17:31.134 ] 00:17:31.134 13:01:35 -- common/autotest_common.sh@893 -- # return 0 00:17:31.134 13:01:35 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:31.134 13:01:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:31.134 13:01:35 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:31.134 13:01:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:31.134 13:01:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:31.134 13:01:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:31.134 13:01:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:31.134 13:01:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:31.134 13:01:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:31.134 13:01:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:31.134 13:01:35 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:31.134 13:01:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:31.134 13:01:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:31.134 13:01:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.393 13:01:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:31.393 "name": "Existed_Raid", 00:17:31.393 "uuid": "1fb95275-c5b0-4b34-94df-7aef13b1a577", 00:17:31.393 "strip_size_kb": 0, 00:17:31.393 "state": "online", 00:17:31.393 "raid_level": "raid1", 00:17:31.393 "superblock": false, 00:17:31.393 "num_base_bdevs": 2, 00:17:31.393 "num_base_bdevs_discovered": 2, 00:17:31.393 "num_base_bdevs_operational": 2, 00:17:31.393 "base_bdevs_list": [ 00:17:31.393 { 00:17:31.393 "name": "BaseBdev1", 00:17:31.393 "uuid": "2593ced4-bb21-455f-bbea-fa9f2829de30", 00:17:31.393 "is_configured": true, 00:17:31.393 "data_offset": 0, 00:17:31.393 "data_size": 65536 00:17:31.393 }, 00:17:31.393 { 00:17:31.393 "name": "BaseBdev2", 00:17:31.393 "uuid": "dc4ff6fc-9924-4bd8-ad40-a55ae36aac53", 00:17:31.393 "is_configured": true, 00:17:31.393 "data_offset": 0, 00:17:31.393 "data_size": 65536 00:17:31.393 } 00:17:31.393 ] 00:17:31.393 }' 00:17:31.393 13:01:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:31.393 13:01:35 -- common/autotest_common.sh@10 -- # set +x 00:17:32.328 13:01:36 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:32.328 [2024-04-17 13:01:36.449318] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:32.586 13:01:36 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:32.586 13:01:36 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:17:32.586 13:01:36 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:32.586 13:01:36 -- bdev/bdev_raid.sh@196 -- # return 0 00:17:32.586 13:01:36 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:17:32.586 13:01:36 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:32.586 13:01:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:32.586 13:01:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:32.586 13:01:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:32.586 13:01:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:32.586 13:01:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:17:32.586 13:01:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:32.586 13:01:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:32.586 13:01:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:32.586 13:01:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:32.586 13:01:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:32.587 13:01:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.846 13:01:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:32.846 "name": "Existed_Raid", 00:17:32.846 "uuid": "1fb95275-c5b0-4b34-94df-7aef13b1a577", 00:17:32.846 "strip_size_kb": 0, 00:17:32.846 "state": "online", 00:17:32.846 "raid_level": "raid1", 00:17:32.846 "superblock": false, 00:17:32.846 "num_base_bdevs": 2, 00:17:32.846 "num_base_bdevs_discovered": 1, 00:17:32.846 "num_base_bdevs_operational": 1, 00:17:32.846 "base_bdevs_list": [ 
00:17:32.846 { 00:17:32.846 "name": null, 00:17:32.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.846 "is_configured": false, 00:17:32.846 "data_offset": 0, 00:17:32.846 "data_size": 65536 00:17:32.846 }, 00:17:32.846 { 00:17:32.846 "name": "BaseBdev2", 00:17:32.846 "uuid": "dc4ff6fc-9924-4bd8-ad40-a55ae36aac53", 00:17:32.846 "is_configured": true, 00:17:32.846 "data_offset": 0, 00:17:32.846 "data_size": 65536 00:17:32.846 } 00:17:32.846 ] 00:17:32.846 }' 00:17:32.846 13:01:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:32.846 13:01:36 -- common/autotest_common.sh@10 -- # set +x 00:17:33.412 13:01:37 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:33.412 13:01:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:33.412 13:01:37 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.412 13:01:37 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:33.670 13:01:37 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:33.670 13:01:37 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:33.670 13:01:37 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:33.928 [2024-04-17 13:01:37.980251] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:33.928 [2024-04-17 13:01:37.980566] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:33.928 [2024-04-17 13:01:38.063236] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:33.928 [2024-04-17 13:01:38.063566] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:33.928 [2024-04-17 13:01:38.063674] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:17:34.187 13:01:38 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:34.187 13:01:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:34.187 13:01:38 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.187 13:01:38 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:34.187 13:01:38 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:34.187 13:01:38 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:34.187 13:01:38 -- bdev/bdev_raid.sh@287 -- # killprocess 121116 00:17:34.187 13:01:38 -- common/autotest_common.sh@924 -- # '[' -z 121116 ']' 00:17:34.187 13:01:38 -- common/autotest_common.sh@928 -- # kill -0 121116 00:17:34.445 13:01:38 -- common/autotest_common.sh@929 -- # uname 00:17:34.445 13:01:38 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:17:34.445 13:01:38 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 121116 00:17:34.445 killing process with pid 121116 00:17:34.445 13:01:38 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:17:34.445 13:01:38 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:17:34.445 13:01:38 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 121116' 00:17:34.445 13:01:38 -- common/autotest_common.sh@943 -- # kill 121116 00:17:34.445 13:01:38 -- common/autotest_common.sh@948 -- # wait 121116 00:17:34.445 [2024-04-17 13:01:38.350738] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:34.445 [2024-04-17 13:01:38.350878] 
bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:35.378 ************************************ 00:17:35.378 END TEST raid_state_function_test 00:17:35.378 ************************************ 00:17:35.378 13:01:39 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:35.378 00:17:35.378 real 0m10.905s 00:17:35.378 user 0m19.145s 00:17:35.378 sys 0m1.220s 00:17:35.378 13:01:39 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:17:35.378 13:01:39 -- common/autotest_common.sh@10 -- # set +x 00:17:35.378 13:01:39 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:17:35.378 13:01:39 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:17:35.378 13:01:39 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:17:35.378 13:01:39 -- common/autotest_common.sh@10 -- # set +x 00:17:35.636 ************************************ 00:17:35.636 START TEST raid_state_function_test_sb 00:17:35.636 ************************************ 00:17:35.636 13:01:39 -- common/autotest_common.sh@1099 -- # raid_state_function_test raid1 2 true 00:17:35.636 13:01:39 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:17:35.636 13:01:39 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:17:35.636 13:01:39 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:17:35.636 13:01:39 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:35.636 13:01:39 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:17:35.636 13:01:39 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:35.636 13:01:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:35.636 13:01:39 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:35.636 13:01:39 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:35.636 13:01:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:35.636 13:01:39 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:35.636 13:01:39 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:35.636 13:01:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:35.636 13:01:39 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:35.636 13:01:39 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:35.636 13:01:39 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:35.636 13:01:39 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:35.636 13:01:39 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:35.636 13:01:39 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:17:35.636 13:01:39 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:17:35.636 13:01:39 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:17:35.636 13:01:39 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:17:35.636 13:01:39 -- bdev/bdev_raid.sh@226 -- # raid_pid=121460 00:17:35.636 13:01:39 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:35.636 13:01:39 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 121460' 00:17:35.636 Process raid pid: 121460 00:17:35.636 13:01:39 -- bdev/bdev_raid.sh@228 -- # waitforlisten 121460 /var/tmp/spdk-raid.sock 00:17:35.636 13:01:39 -- common/autotest_common.sh@817 -- # '[' -z 121460 ']' 00:17:35.636 13:01:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:35.636 13:01:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:35.636 13:01:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:35.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:35.636 13:01:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:35.636 13:01:39 -- common/autotest_common.sh@10 -- # set +x 00:17:35.636 [2024-04-17 13:01:39.605704] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:17:35.636 [2024-04-17 13:01:39.605861] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:35.636 [2024-04-17 13:01:39.768140] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.894 [2024-04-17 13:01:40.005897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.153 [2024-04-17 13:01:40.205489] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:36.719 13:01:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:36.719 13:01:40 -- common/autotest_common.sh@850 -- # return 0 00:17:36.719 13:01:40 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:36.719 [2024-04-17 13:01:40.822444] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:36.719 [2024-04-17 13:01:40.822522] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:36.719 [2024-04-17 13:01:40.822542] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:36.719 [2024-04-17 13:01:40.822562] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:36.719 13:01:40 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:36.719 13:01:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:36.719 13:01:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:36.719 13:01:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:36.719 13:01:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:36.719 13:01:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:36.719 13:01:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:36.719 13:01:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:36.719 13:01:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:36.719 13:01:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:36.719 13:01:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:36.719 13:01:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:36.979 13:01:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:36.979 "name": "Existed_Raid", 00:17:36.979 "uuid": "72a7336d-555c-4b88-b1f2-45e0f679ce36", 00:17:36.979 "strip_size_kb": 0, 00:17:36.979 "state": "configuring", 00:17:36.979 "raid_level": "raid1", 00:17:36.979 "superblock": true, 00:17:36.979 "num_base_bdevs": 2, 00:17:36.979 "num_base_bdevs_discovered": 0, 00:17:36.979 "num_base_bdevs_operational": 2, 00:17:36.979 "base_bdevs_list": [ 00:17:36.979 { 00:17:36.979 "name": "BaseBdev1", 00:17:36.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.979 "is_configured": false, 
00:17:36.979 "data_offset": 0, 00:17:36.979 "data_size": 0 00:17:36.979 }, 00:17:36.979 { 00:17:36.979 "name": "BaseBdev2", 00:17:36.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.979 "is_configured": false, 00:17:36.979 "data_offset": 0, 00:17:36.979 "data_size": 0 00:17:36.979 } 00:17:36.979 ] 00:17:36.979 }' 00:17:36.979 13:01:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:36.979 13:01:41 -- common/autotest_common.sh@10 -- # set +x 00:17:37.919 13:01:41 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:37.919 [2024-04-17 13:01:41.986545] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:37.919 [2024-04-17 13:01:41.986598] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:17:37.919 13:01:41 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:38.178 [2024-04-17 13:01:42.246663] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:38.178 [2024-04-17 13:01:42.246791] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:38.178 [2024-04-17 13:01:42.246817] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:38.178 [2024-04-17 13:01:42.246846] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:38.178 13:01:42 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:38.437 [2024-04-17 13:01:42.510965] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:38.437 BaseBdev1 00:17:38.437 13:01:42 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:38.437 13:01:42 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:17:38.437 13:01:42 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:38.437 13:01:42 -- common/autotest_common.sh@887 -- # local i 00:17:38.437 13:01:42 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:38.437 13:01:42 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:38.437 13:01:42 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:38.696 13:01:42 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:38.954 [ 00:17:38.954 { 00:17:38.954 "name": "BaseBdev1", 00:17:38.954 "aliases": [ 00:17:38.954 "fef99a00-2465-401e-8fce-fb42db148e7f" 00:17:38.954 ], 00:17:38.954 "product_name": "Malloc disk", 00:17:38.954 "block_size": 512, 00:17:38.954 "num_blocks": 65536, 00:17:38.954 "uuid": "fef99a00-2465-401e-8fce-fb42db148e7f", 00:17:38.954 "assigned_rate_limits": { 00:17:38.954 "rw_ios_per_sec": 0, 00:17:38.954 "rw_mbytes_per_sec": 0, 00:17:38.954 "r_mbytes_per_sec": 0, 00:17:38.954 "w_mbytes_per_sec": 0 00:17:38.954 }, 00:17:38.954 "claimed": true, 00:17:38.954 "claim_type": "exclusive_write", 00:17:38.954 "zoned": false, 00:17:38.954 "supported_io_types": { 00:17:38.954 "read": true, 00:17:38.954 "write": true, 00:17:38.954 "unmap": true, 00:17:38.954 "write_zeroes": true, 00:17:38.954 "flush": true, 00:17:38.954 "reset": true, 00:17:38.954 
"compare": false, 00:17:38.954 "compare_and_write": false, 00:17:38.954 "abort": true, 00:17:38.954 "nvme_admin": false, 00:17:38.954 "nvme_io": false 00:17:38.954 }, 00:17:38.954 "memory_domains": [ 00:17:38.954 { 00:17:38.954 "dma_device_id": "system", 00:17:38.954 "dma_device_type": 1 00:17:38.954 }, 00:17:38.954 { 00:17:38.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.954 "dma_device_type": 2 00:17:38.954 } 00:17:38.954 ], 00:17:38.954 "driver_specific": {} 00:17:38.954 } 00:17:38.954 ] 00:17:38.954 13:01:42 -- common/autotest_common.sh@893 -- # return 0 00:17:38.954 13:01:42 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:38.954 13:01:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:38.954 13:01:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:38.954 13:01:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:38.954 13:01:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:38.954 13:01:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:38.954 13:01:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:38.954 13:01:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:38.954 13:01:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:38.954 13:01:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:38.954 13:01:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.954 13:01:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:39.213 13:01:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:39.213 "name": "Existed_Raid", 00:17:39.213 "uuid": "5a149b24-7073-40a4-bb32-9226fc384946", 00:17:39.213 "strip_size_kb": 0, 00:17:39.213 "state": "configuring", 00:17:39.213 "raid_level": "raid1", 00:17:39.213 "superblock": true, 00:17:39.213 "num_base_bdevs": 2, 00:17:39.213 "num_base_bdevs_discovered": 1, 00:17:39.213 "num_base_bdevs_operational": 2, 00:17:39.213 "base_bdevs_list": [ 00:17:39.213 { 00:17:39.213 "name": "BaseBdev1", 00:17:39.213 "uuid": "fef99a00-2465-401e-8fce-fb42db148e7f", 00:17:39.213 "is_configured": true, 00:17:39.213 "data_offset": 2048, 00:17:39.213 "data_size": 63488 00:17:39.213 }, 00:17:39.213 { 00:17:39.213 "name": "BaseBdev2", 00:17:39.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.213 "is_configured": false, 00:17:39.213 "data_offset": 0, 00:17:39.213 "data_size": 0 00:17:39.213 } 00:17:39.213 ] 00:17:39.213 }' 00:17:39.213 13:01:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:39.213 13:01:43 -- common/autotest_common.sh@10 -- # set +x 00:17:39.779 13:01:43 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:40.037 [2024-04-17 13:01:44.059405] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:40.037 [2024-04-17 13:01:44.059479] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:17:40.037 13:01:44 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:17:40.037 13:01:44 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:40.296 13:01:44 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:40.554 BaseBdev1 00:17:40.554 
13:01:44 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:17:40.554 13:01:44 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:17:40.554 13:01:44 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:40.554 13:01:44 -- common/autotest_common.sh@887 -- # local i 00:17:40.554 13:01:44 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:40.554 13:01:44 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:40.554 13:01:44 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:40.811 13:01:44 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:41.069 [ 00:17:41.069 { 00:17:41.069 "name": "BaseBdev1", 00:17:41.069 "aliases": [ 00:17:41.069 "4cbec772-768e-4550-933f-3a2dffcee622" 00:17:41.069 ], 00:17:41.069 "product_name": "Malloc disk", 00:17:41.069 "block_size": 512, 00:17:41.069 "num_blocks": 65536, 00:17:41.069 "uuid": "4cbec772-768e-4550-933f-3a2dffcee622", 00:17:41.069 "assigned_rate_limits": { 00:17:41.069 "rw_ios_per_sec": 0, 00:17:41.069 "rw_mbytes_per_sec": 0, 00:17:41.069 "r_mbytes_per_sec": 0, 00:17:41.069 "w_mbytes_per_sec": 0 00:17:41.069 }, 00:17:41.069 "claimed": false, 00:17:41.069 "zoned": false, 00:17:41.069 "supported_io_types": { 00:17:41.069 "read": true, 00:17:41.069 "write": true, 00:17:41.069 "unmap": true, 00:17:41.069 "write_zeroes": true, 00:17:41.069 "flush": true, 00:17:41.069 "reset": true, 00:17:41.069 "compare": false, 00:17:41.069 "compare_and_write": false, 00:17:41.069 "abort": true, 00:17:41.069 "nvme_admin": false, 00:17:41.069 "nvme_io": false 00:17:41.069 }, 00:17:41.069 "memory_domains": [ 00:17:41.070 { 00:17:41.070 "dma_device_id": "system", 00:17:41.070 "dma_device_type": 1 00:17:41.070 }, 00:17:41.070 { 00:17:41.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:41.070 "dma_device_type": 2 00:17:41.070 } 00:17:41.070 ], 00:17:41.070 "driver_specific": {} 00:17:41.070 } 00:17:41.070 ] 00:17:41.070 13:01:45 -- common/autotest_common.sh@893 -- # return 0 00:17:41.070 13:01:45 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:41.328 [2024-04-17 13:01:45.349684] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:41.328 [2024-04-17 13:01:45.351930] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:41.328 [2024-04-17 13:01:45.352005] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:41.328 13:01:45 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:41.328 13:01:45 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:41.328 13:01:45 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:41.328 13:01:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:41.328 13:01:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:41.328 13:01:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:41.328 13:01:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:41.328 13:01:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:41.328 13:01:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:41.328 13:01:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:41.328 13:01:45 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:41.328 13:01:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:41.328 13:01:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.328 13:01:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:41.586 13:01:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:41.586 "name": "Existed_Raid", 00:17:41.586 "uuid": "3bab133a-b4ab-47d4-9333-b33324cd626b", 00:17:41.586 "strip_size_kb": 0, 00:17:41.586 "state": "configuring", 00:17:41.586 "raid_level": "raid1", 00:17:41.586 "superblock": true, 00:17:41.586 "num_base_bdevs": 2, 00:17:41.586 "num_base_bdevs_discovered": 1, 00:17:41.586 "num_base_bdevs_operational": 2, 00:17:41.586 "base_bdevs_list": [ 00:17:41.586 { 00:17:41.586 "name": "BaseBdev1", 00:17:41.586 "uuid": "4cbec772-768e-4550-933f-3a2dffcee622", 00:17:41.586 "is_configured": true, 00:17:41.586 "data_offset": 2048, 00:17:41.586 "data_size": 63488 00:17:41.586 }, 00:17:41.586 { 00:17:41.586 "name": "BaseBdev2", 00:17:41.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.586 "is_configured": false, 00:17:41.586 "data_offset": 0, 00:17:41.586 "data_size": 0 00:17:41.586 } 00:17:41.586 ] 00:17:41.586 }' 00:17:41.586 13:01:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:41.586 13:01:45 -- common/autotest_common.sh@10 -- # set +x 00:17:42.153 13:01:46 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:42.720 [2024-04-17 13:01:46.558373] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:42.720 [2024-04-17 13:01:46.558637] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:17:42.720 [2024-04-17 13:01:46.558655] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:42.720 [2024-04-17 13:01:46.558799] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:17:42.720 BaseBdev2 00:17:42.720 [2024-04-17 13:01:46.559194] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:17:42.720 [2024-04-17 13:01:46.559210] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:17:42.720 [2024-04-17 13:01:46.559370] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:42.720 13:01:46 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:42.720 13:01:46 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:17:42.720 13:01:46 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:42.720 13:01:46 -- common/autotest_common.sh@887 -- # local i 00:17:42.720 13:01:46 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:42.720 13:01:46 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:42.720 13:01:46 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:42.720 13:01:46 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:42.978 [ 00:17:42.978 { 00:17:42.978 "name": "BaseBdev2", 00:17:42.978 "aliases": [ 00:17:42.978 "c4402ebf-2589-4356-aa0e-4f66b067132e" 00:17:42.978 ], 00:17:42.978 "product_name": "Malloc disk", 00:17:42.978 "block_size": 512, 
00:17:42.978 "num_blocks": 65536, 00:17:42.978 "uuid": "c4402ebf-2589-4356-aa0e-4f66b067132e", 00:17:42.978 "assigned_rate_limits": { 00:17:42.978 "rw_ios_per_sec": 0, 00:17:42.978 "rw_mbytes_per_sec": 0, 00:17:42.978 "r_mbytes_per_sec": 0, 00:17:42.978 "w_mbytes_per_sec": 0 00:17:42.978 }, 00:17:42.978 "claimed": true, 00:17:42.978 "claim_type": "exclusive_write", 00:17:42.978 "zoned": false, 00:17:42.978 "supported_io_types": { 00:17:42.978 "read": true, 00:17:42.978 "write": true, 00:17:42.978 "unmap": true, 00:17:42.978 "write_zeroes": true, 00:17:42.978 "flush": true, 00:17:42.978 "reset": true, 00:17:42.978 "compare": false, 00:17:42.978 "compare_and_write": false, 00:17:42.978 "abort": true, 00:17:42.978 "nvme_admin": false, 00:17:42.978 "nvme_io": false 00:17:42.978 }, 00:17:42.978 "memory_domains": [ 00:17:42.978 { 00:17:42.978 "dma_device_id": "system", 00:17:42.978 "dma_device_type": 1 00:17:42.978 }, 00:17:42.978 { 00:17:42.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.979 "dma_device_type": 2 00:17:42.979 } 00:17:42.979 ], 00:17:42.979 "driver_specific": {} 00:17:42.979 } 00:17:42.979 ] 00:17:42.979 13:01:47 -- common/autotest_common.sh@893 -- # return 0 00:17:42.979 13:01:47 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:42.979 13:01:47 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:42.979 13:01:47 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:42.979 13:01:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:42.979 13:01:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:42.979 13:01:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:42.979 13:01:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:42.979 13:01:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:42.979 13:01:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:42.979 13:01:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:42.979 13:01:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:42.979 13:01:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:42.979 13:01:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:42.979 13:01:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.237 13:01:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:43.237 "name": "Existed_Raid", 00:17:43.237 "uuid": "3bab133a-b4ab-47d4-9333-b33324cd626b", 00:17:43.237 "strip_size_kb": 0, 00:17:43.237 "state": "online", 00:17:43.237 "raid_level": "raid1", 00:17:43.237 "superblock": true, 00:17:43.237 "num_base_bdevs": 2, 00:17:43.237 "num_base_bdevs_discovered": 2, 00:17:43.237 "num_base_bdevs_operational": 2, 00:17:43.237 "base_bdevs_list": [ 00:17:43.237 { 00:17:43.237 "name": "BaseBdev1", 00:17:43.237 "uuid": "4cbec772-768e-4550-933f-3a2dffcee622", 00:17:43.237 "is_configured": true, 00:17:43.237 "data_offset": 2048, 00:17:43.237 "data_size": 63488 00:17:43.237 }, 00:17:43.237 { 00:17:43.237 "name": "BaseBdev2", 00:17:43.237 "uuid": "c4402ebf-2589-4356-aa0e-4f66b067132e", 00:17:43.237 "is_configured": true, 00:17:43.237 "data_offset": 2048, 00:17:43.237 "data_size": 63488 00:17:43.237 } 00:17:43.237 ] 00:17:43.237 }' 00:17:43.237 13:01:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:43.237 13:01:47 -- common/autotest_common.sh@10 -- # set +x 00:17:44.172 13:01:47 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:44.172 [2024-04-17 13:01:48.254939] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:44.431 13:01:48 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:44.431 13:01:48 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:17:44.431 13:01:48 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:44.431 13:01:48 -- bdev/bdev_raid.sh@196 -- # return 0 00:17:44.431 13:01:48 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:17:44.431 13:01:48 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:44.431 13:01:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:44.431 13:01:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:44.431 13:01:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:44.431 13:01:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:44.431 13:01:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:17:44.431 13:01:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:44.431 13:01:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:44.431 13:01:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:44.431 13:01:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:44.431 13:01:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.431 13:01:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.690 13:01:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:44.690 "name": "Existed_Raid", 00:17:44.690 "uuid": "3bab133a-b4ab-47d4-9333-b33324cd626b", 00:17:44.690 "strip_size_kb": 0, 00:17:44.690 "state": "online", 00:17:44.690 "raid_level": "raid1", 00:17:44.690 "superblock": true, 00:17:44.690 "num_base_bdevs": 2, 00:17:44.690 "num_base_bdevs_discovered": 1, 00:17:44.690 "num_base_bdevs_operational": 1, 00:17:44.690 "base_bdevs_list": [ 00:17:44.690 { 00:17:44.690 "name": null, 00:17:44.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.690 "is_configured": false, 00:17:44.690 "data_offset": 2048, 00:17:44.690 "data_size": 63488 00:17:44.690 }, 00:17:44.690 { 00:17:44.690 "name": "BaseBdev2", 00:17:44.690 "uuid": "c4402ebf-2589-4356-aa0e-4f66b067132e", 00:17:44.690 "is_configured": true, 00:17:44.690 "data_offset": 2048, 00:17:44.690 "data_size": 63488 00:17:44.690 } 00:17:44.690 ] 00:17:44.690 }' 00:17:44.690 13:01:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:44.690 13:01:48 -- common/autotest_common.sh@10 -- # set +x 00:17:45.257 13:01:49 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:45.257 13:01:49 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:45.257 13:01:49 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:45.257 13:01:49 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:45.516 13:01:49 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:45.516 13:01:49 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:45.516 13:01:49 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:45.775 [2024-04-17 13:01:49.892142] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:45.775 [2024-04-17 13:01:49.892258] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 
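The check traced above reduces to one RPC call plus a jq filter: raid1 carries redundancy, so deleting BaseBdev1 must leave Existed_Raid online with a single discovered base bdev. A minimal standalone sketch of that assertion (assuming the same rpc.py path and RPC socket as this run; the name Existed_Raid and the expected counts come straight from the trace):

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    # Pull the raid bdev record and extract the two fields the test compares.
    info=$($rpc bdev_raid_get_bdevs all | jq '.[] | select(.name == "Existed_Raid")')
    state=$(jq -r '.state' <<< "$info")
    discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$info")
    # One of two mirrored base bdevs is gone, but the array must stay online.
    [[ "$state" == online && "$discovered" -eq 1 ]] || exit 1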
00:17:46.033 [2024-04-17 13:01:49.975937] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:46.033 [2024-04-17 13:01:49.976067] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:46.033 [2024-04-17 13:01:49.976082] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:17:46.033 13:01:49 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:46.033 13:01:49 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:46.033 13:01:49 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:46.033 13:01:49 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:46.301 13:01:50 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:46.301 13:01:50 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:46.301 13:01:50 -- bdev/bdev_raid.sh@287 -- # killprocess 121460 00:17:46.301 13:01:50 -- common/autotest_common.sh@924 -- # '[' -z 121460 ']' 00:17:46.301 13:01:50 -- common/autotest_common.sh@928 -- # kill -0 121460 00:17:46.301 13:01:50 -- common/autotest_common.sh@929 -- # uname 00:17:46.301 13:01:50 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:17:46.301 13:01:50 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 121460 00:17:46.301 killing process with pid 121460 00:17:46.301 13:01:50 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:17:46.301 13:01:50 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:17:46.301 13:01:50 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 121460' 00:17:46.301 13:01:50 -- common/autotest_common.sh@943 -- # kill 121460 00:17:46.301 13:01:50 -- common/autotest_common.sh@948 -- # wait 121460 00:17:46.301 [2024-04-17 13:01:50.277277] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:46.301 [2024-04-17 13:01:50.277404] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:47.680 ************************************ 00:17:47.680 END TEST raid_state_function_test_sb 00:17:47.680 ************************************ 00:17:47.680 13:01:51 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:47.680 00:17:47.680 real 0m11.846s 00:17:47.680 user 0m20.733s 00:17:47.680 sys 0m1.343s 00:17:47.680 13:01:51 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:17:47.680 13:01:51 -- common/autotest_common.sh@10 -- # set +x 00:17:47.680 13:01:51 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:17:47.680 13:01:51 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:17:47.680 13:01:51 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:17:47.680 13:01:51 -- common/autotest_common.sh@10 -- # set +x 00:17:47.680 ************************************ 00:17:47.680 START TEST raid_superblock_test 00:17:47.680 ************************************ 00:17:47.680 13:01:51 -- common/autotest_common.sh@1099 -- # raid_superblock_test raid1 2 00:17:47.680 13:01:51 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:17:47.680 13:01:51 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:17:47.680 13:01:51 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:17:47.680 13:01:51 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:17:47.680 13:01:51 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:17:47.680 13:01:51 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:17:47.680 13:01:51 -- bdev/bdev_raid.sh@342 -- 
# base_bdevs_pt_uuid=() 00:17:47.680 13:01:51 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:17:47.680 13:01:51 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:17:47.680 13:01:51 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:17:47.680 13:01:51 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:17:47.680 13:01:51 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:17:47.680 13:01:51 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:17:47.681 13:01:51 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:17:47.681 13:01:51 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:17:47.681 13:01:51 -- bdev/bdev_raid.sh@357 -- # raid_pid=121827 00:17:47.681 13:01:51 -- bdev/bdev_raid.sh@358 -- # waitforlisten 121827 /var/tmp/spdk-raid.sock 00:17:47.681 13:01:51 -- common/autotest_common.sh@817 -- # '[' -z 121827 ']' 00:17:47.681 13:01:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:47.681 13:01:51 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:47.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:47.681 13:01:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:47.681 13:01:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:47.681 13:01:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:47.681 13:01:51 -- common/autotest_common.sh@10 -- # set +x 00:17:47.681 [2024-04-17 13:01:51.534614] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:17:47.681 [2024-04-17 13:01:51.534814] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121827 ] 00:17:47.681 [2024-04-17 13:01:51.696128] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.940 [2024-04-17 13:01:51.901113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.199 [2024-04-17 13:01:52.098437] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:48.457 13:01:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:48.457 13:01:52 -- common/autotest_common.sh@850 -- # return 0 00:17:48.457 13:01:52 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:17:48.457 13:01:52 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:48.457 13:01:52 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:17:48.457 13:01:52 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:17:48.457 13:01:52 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:48.457 13:01:52 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:48.457 13:01:52 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:48.457 13:01:52 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:48.457 13:01:52 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:48.716 malloc1 00:17:48.716 13:01:52 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:49.003 [2024-04-17 13:01:52.979759] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:49.003 [2024-04-17 13:01:52.979905] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:49.003 [2024-04-17 13:01:52.979967] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:17:49.003 [2024-04-17 13:01:52.980048] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:49.003 [2024-04-17 13:01:52.982379] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.003 [2024-04-17 13:01:52.982442] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:49.003 pt1 00:17:49.003 13:01:52 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:49.003 13:01:52 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:49.003 13:01:52 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:17:49.003 13:01:52 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:17:49.003 13:01:52 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:49.003 13:01:52 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:49.003 13:01:52 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:49.003 13:01:52 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:49.003 13:01:52 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:49.270 malloc2 00:17:49.270 13:01:53 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:49.529 [2024-04-17 13:01:53.541552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:49.529 [2024-04-17 13:01:53.541662] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:49.529 [2024-04-17 13:01:53.541712] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:17:49.529 [2024-04-17 13:01:53.541770] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:49.529 [2024-04-17 13:01:53.544288] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.529 [2024-04-17 13:01:53.544342] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:49.529 pt2 00:17:49.529 13:01:53 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:49.529 13:01:53 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:49.530 13:01:53 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:17:49.788 [2024-04-17 13:01:53.765677] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:49.788 [2024-04-17 13:01:53.767941] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:49.788 [2024-04-17 13:01:53.768211] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:17:49.788 [2024-04-17 13:01:53.768228] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:49.788 [2024-04-17 13:01:53.768361] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:17:49.788 [2024-04-17 13:01:53.768785] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:17:49.788 
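The assembly just traced can be replayed by hand against the same socket: two 32 MiB malloc bdevs, each wrapped in a passthru bdev with a fixed UUID, then combined into a raid1 with an on-disk superblock (-s), which is why the JSON dumps report data_offset 2048 out of 65536 blocks. A minimal sketch, with names, sizes, and the rpc.py path taken from the trace and a running bdev_svc app assumed:

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    for i in 1 2; do
        $rpc bdev_malloc_create 32 512 -b malloc$i     # 32 MiB backing bdev, 512 B blocks
        $rpc bdev_passthru_create -b malloc$i -p pt$i \
            -u 00000000-0000-0000-0000-00000000000$i   # fixed UUID keeps reruns stable
    done
    $rpc bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s   # -s writes the superblock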
[2024-04-17 13:01:53.768810] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:17:49.788 [2024-04-17 13:01:53.768977] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:49.788 13:01:53 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:49.788 13:01:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:49.788 13:01:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:49.788 13:01:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:49.788 13:01:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:49.788 13:01:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:49.788 13:01:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:49.788 13:01:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:49.788 13:01:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:49.788 13:01:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:49.788 13:01:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.788 13:01:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.047 13:01:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:50.047 "name": "raid_bdev1", 00:17:50.047 "uuid": "e8a297e8-db1e-4f92-9f6f-0eb1bd340f4e", 00:17:50.047 "strip_size_kb": 0, 00:17:50.047 "state": "online", 00:17:50.047 "raid_level": "raid1", 00:17:50.047 "superblock": true, 00:17:50.047 "num_base_bdevs": 2, 00:17:50.047 "num_base_bdevs_discovered": 2, 00:17:50.047 "num_base_bdevs_operational": 2, 00:17:50.047 "base_bdevs_list": [ 00:17:50.047 { 00:17:50.047 "name": "pt1", 00:17:50.047 "uuid": "26abad15-ef23-51b8-b4c9-1566d3aaec4e", 00:17:50.047 "is_configured": true, 00:17:50.047 "data_offset": 2048, 00:17:50.047 "data_size": 63488 00:17:50.047 }, 00:17:50.047 { 00:17:50.047 "name": "pt2", 00:17:50.047 "uuid": "b5f906c5-e39a-58de-bb6f-5764a870e6f4", 00:17:50.047 "is_configured": true, 00:17:50.047 "data_offset": 2048, 00:17:50.047 "data_size": 63488 00:17:50.047 } 00:17:50.047 ] 00:17:50.047 }' 00:17:50.047 13:01:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:50.047 13:01:54 -- common/autotest_common.sh@10 -- # set +x 00:17:50.631 13:01:54 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:50.631 13:01:54 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:17:50.889 [2024-04-17 13:01:54.890083] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:50.889 13:01:54 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=e8a297e8-db1e-4f92-9f6f-0eb1bd340f4e 00:17:50.889 13:01:54 -- bdev/bdev_raid.sh@380 -- # '[' -z e8a297e8-db1e-4f92-9f6f-0eb1bd340f4e ']' 00:17:50.889 13:01:54 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:51.148 [2024-04-17 13:01:55.125872] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:51.148 [2024-04-17 13:01:55.125908] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:51.148 [2024-04-17 13:01:55.126026] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:51.148 [2024-04-17 13:01:55.126101] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:17:51.148 [2024-04-17 13:01:55.126116] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:17:51.148 13:01:55 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:51.148 13:01:55 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:17:51.406 13:01:55 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:17:51.406 13:01:55 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:17:51.406 13:01:55 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:51.406 13:01:55 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:51.664 13:01:55 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:51.664 13:01:55 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:51.923 13:01:55 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:51.923 13:01:55 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:52.181 13:01:56 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:17:52.181 13:01:56 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:52.181 13:01:56 -- common/autotest_common.sh@638 -- # local es=0 00:17:52.181 13:01:56 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:52.181 13:01:56 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:52.181 13:01:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:52.181 13:01:56 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:52.181 13:01:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:52.181 13:01:56 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:52.181 13:01:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:52.181 13:01:56 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:52.181 13:01:56 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:52.181 13:01:56 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:52.440 [2024-04-17 13:01:56.490186] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:52.440 [2024-04-17 13:01:56.492249] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:52.440 [2024-04-17 13:01:56.492330] bdev_raid.c:2995:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:17:52.440 [2024-04-17 13:01:56.492409] bdev_raid.c:2995:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:17:52.440 [2024-04-17 13:01:56.492448] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:52.440 [2024-04-17 13:01:56.492460] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state configuring 00:17:52.440 request: 00:17:52.440 { 00:17:52.440 "name": "raid_bdev1", 00:17:52.440 "raid_level": "raid1", 00:17:52.440 "base_bdevs": [ 00:17:52.440 "malloc1", 00:17:52.440 "malloc2" 00:17:52.440 ], 00:17:52.440 "superblock": false, 00:17:52.440 "method": "bdev_raid_create", 00:17:52.440 "req_id": 1 00:17:52.440 } 00:17:52.440 Got JSON-RPC error response 00:17:52.440 response: 00:17:52.440 { 00:17:52.440 "code": -17, 00:17:52.440 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:52.440 } 00:17:52.440 13:01:56 -- common/autotest_common.sh@641 -- # es=1 00:17:52.440 13:01:56 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:52.440 13:01:56 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:52.440 13:01:56 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:52.440 13:01:56 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.440 13:01:56 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:17:52.698 13:01:56 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:17:52.698 13:01:56 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:17:52.698 13:01:56 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:52.956 [2024-04-17 13:01:56.934227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:52.956 [2024-04-17 13:01:56.934355] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:52.956 [2024-04-17 13:01:56.934400] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:52.956 [2024-04-17 13:01:56.934428] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:52.956 [2024-04-17 13:01:56.936905] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:52.956 [2024-04-17 13:01:56.936967] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:52.956 [2024-04-17 13:01:56.937076] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:52.956 [2024-04-17 13:01:56.937147] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:52.956 pt1 00:17:52.956 13:01:56 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:52.956 13:01:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:52.956 13:01:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:52.956 13:01:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:52.956 13:01:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:52.956 13:01:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:52.956 13:01:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:52.956 13:01:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:52.956 13:01:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:52.956 13:01:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:52.956 13:01:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.956 13:01:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.214 13:01:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:53.214 "name": "raid_bdev1", 
00:17:53.214 "uuid": "e8a297e8-db1e-4f92-9f6f-0eb1bd340f4e", 00:17:53.214 "strip_size_kb": 0, 00:17:53.214 "state": "configuring", 00:17:53.214 "raid_level": "raid1", 00:17:53.214 "superblock": true, 00:17:53.214 "num_base_bdevs": 2, 00:17:53.214 "num_base_bdevs_discovered": 1, 00:17:53.214 "num_base_bdevs_operational": 2, 00:17:53.214 "base_bdevs_list": [ 00:17:53.214 { 00:17:53.214 "name": "pt1", 00:17:53.214 "uuid": "26abad15-ef23-51b8-b4c9-1566d3aaec4e", 00:17:53.214 "is_configured": true, 00:17:53.214 "data_offset": 2048, 00:17:53.214 "data_size": 63488 00:17:53.214 }, 00:17:53.214 { 00:17:53.214 "name": null, 00:17:53.214 "uuid": "b5f906c5-e39a-58de-bb6f-5764a870e6f4", 00:17:53.214 "is_configured": false, 00:17:53.214 "data_offset": 2048, 00:17:53.214 "data_size": 63488 00:17:53.214 } 00:17:53.214 ] 00:17:53.214 }' 00:17:53.214 13:01:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:53.214 13:01:57 -- common/autotest_common.sh@10 -- # set +x 00:17:53.781 13:01:57 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:17:53.781 13:01:57 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:17:53.781 13:01:57 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:53.781 13:01:57 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:54.040 [2024-04-17 13:01:58.086531] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:54.040 [2024-04-17 13:01:58.086653] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:54.040 [2024-04-17 13:01:58.086702] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:54.040 [2024-04-17 13:01:58.086730] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:54.040 [2024-04-17 13:01:58.087250] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:54.040 [2024-04-17 13:01:58.087306] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:54.040 [2024-04-17 13:01:58.087413] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:54.040 [2024-04-17 13:01:58.087441] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:54.040 [2024-04-17 13:01:58.087568] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:17:54.040 [2024-04-17 13:01:58.087595] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:54.040 [2024-04-17 13:01:58.087721] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:17:54.040 [2024-04-17 13:01:58.088106] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:17:54.040 [2024-04-17 13:01:58.088130] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:17:54.040 [2024-04-17 13:01:58.088273] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:54.040 pt2 00:17:54.040 13:01:58 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:54.040 13:01:58 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:54.040 13:01:58 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:54.040 13:01:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:54.040 13:01:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 
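The NOT-wrapped bdev_raid_create over malloc1/malloc2 a few lines up is the negative half of the superblock story: both malloc bdevs already carry raid_bdev1's superblock, so building a second array directly on them is rejected with -17 (File exists). Sketched as a plain shell check, under the same assumptions as the sketches above:

    # Expected to fail: the base bdevs already belong to raid_bdev1 per their superblocks.
    if $rpc bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1; then
        echo 'bdev_raid_create unexpectedly succeeded' >&2
        exit 1
    fi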
00:17:54.040 13:01:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:54.040 13:01:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:54.040 13:01:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:54.040 13:01:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:54.040 13:01:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:54.040 13:01:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:54.040 13:01:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:54.040 13:01:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:54.040 13:01:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.299 13:01:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:54.299 "name": "raid_bdev1", 00:17:54.299 "uuid": "e8a297e8-db1e-4f92-9f6f-0eb1bd340f4e", 00:17:54.299 "strip_size_kb": 0, 00:17:54.299 "state": "online", 00:17:54.299 "raid_level": "raid1", 00:17:54.299 "superblock": true, 00:17:54.299 "num_base_bdevs": 2, 00:17:54.299 "num_base_bdevs_discovered": 2, 00:17:54.299 "num_base_bdevs_operational": 2, 00:17:54.299 "base_bdevs_list": [ 00:17:54.299 { 00:17:54.299 "name": "pt1", 00:17:54.299 "uuid": "26abad15-ef23-51b8-b4c9-1566d3aaec4e", 00:17:54.299 "is_configured": true, 00:17:54.299 "data_offset": 2048, 00:17:54.299 "data_size": 63488 00:17:54.299 }, 00:17:54.299 { 00:17:54.299 "name": "pt2", 00:17:54.299 "uuid": "b5f906c5-e39a-58de-bb6f-5764a870e6f4", 00:17:54.299 "is_configured": true, 00:17:54.299 "data_offset": 2048, 00:17:54.299 "data_size": 63488 00:17:54.299 } 00:17:54.299 ] 00:17:54.299 }' 00:17:54.299 13:01:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:54.299 13:01:58 -- common/autotest_common.sh@10 -- # set +x 00:17:55.235 13:01:59 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:55.235 13:01:59 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:17:55.235 [2024-04-17 13:01:59.271029] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:55.235 13:01:59 -- bdev/bdev_raid.sh@430 -- # '[' e8a297e8-db1e-4f92-9f6f-0eb1bd340f4e '!=' e8a297e8-db1e-4f92-9f6f-0eb1bd340f4e ']' 00:17:55.235 13:01:59 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:17:55.235 13:01:59 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:55.235 13:01:59 -- bdev/bdev_raid.sh@196 -- # return 0 00:17:55.235 13:01:59 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:55.494 [2024-04-17 13:01:59.530857] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:55.494 13:01:59 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:55.494 13:01:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:55.494 13:01:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:55.494 13:01:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:55.494 13:01:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:55.494 13:01:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:17:55.494 13:01:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:55.494 13:01:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:55.494 13:01:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:55.494 13:01:59 -- bdev/bdev_raid.sh@125 -- # local 
tmp 00:17:55.494 13:01:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.494 13:01:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.752 13:01:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:55.752 "name": "raid_bdev1", 00:17:55.752 "uuid": "e8a297e8-db1e-4f92-9f6f-0eb1bd340f4e", 00:17:55.752 "strip_size_kb": 0, 00:17:55.752 "state": "online", 00:17:55.752 "raid_level": "raid1", 00:17:55.752 "superblock": true, 00:17:55.752 "num_base_bdevs": 2, 00:17:55.752 "num_base_bdevs_discovered": 1, 00:17:55.752 "num_base_bdevs_operational": 1, 00:17:55.752 "base_bdevs_list": [ 00:17:55.752 { 00:17:55.752 "name": null, 00:17:55.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.752 "is_configured": false, 00:17:55.752 "data_offset": 2048, 00:17:55.752 "data_size": 63488 00:17:55.752 }, 00:17:55.752 { 00:17:55.752 "name": "pt2", 00:17:55.752 "uuid": "b5f906c5-e39a-58de-bb6f-5764a870e6f4", 00:17:55.752 "is_configured": true, 00:17:55.752 "data_offset": 2048, 00:17:55.752 "data_size": 63488 00:17:55.752 } 00:17:55.752 ] 00:17:55.752 }' 00:17:55.752 13:01:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:55.752 13:01:59 -- common/autotest_common.sh@10 -- # set +x 00:17:56.685 13:02:00 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:56.686 [2024-04-17 13:02:00.675073] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:56.686 [2024-04-17 13:02:00.675118] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:56.686 [2024-04-17 13:02:00.675200] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:56.686 [2024-04-17 13:02:00.675259] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:56.686 [2024-04-17 13:02:00.675273] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:17:56.686 13:02:00 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:56.686 13:02:00 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:17:56.943 13:02:00 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:17:56.943 13:02:00 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:17:56.943 13:02:00 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:17:56.943 13:02:00 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:17:56.943 13:02:00 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:57.201 13:02:01 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:17:57.201 13:02:01 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:17:57.201 13:02:01 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:17:57.201 13:02:01 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:17:57.201 13:02:01 -- bdev/bdev_raid.sh@462 -- # i=1 00:17:57.201 13:02:01 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:57.201 [2024-04-17 13:02:01.343208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:57.201 [2024-04-17 13:02:01.343332] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:57.201 
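What the passthru re-registration here leads into is superblock-driven auto-assembly: with the old array and both passthru bdevs torn down, re-creating pt2 alone lets examine find the raid superblock on it and bring raid_bdev1 back online with one of its two bases. A sketch of that flow, same rpc shorthand and assumptions as above:

    # Only the second member is recreated; malloc2 still holds raid_bdev1's superblock.
    $rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
    # Examine runs on registration, so the degraded array should reappear on its own.
    $rpc bdev_raid_get_bdevs all | jq -e \
        '.[] | select(.name == "raid_bdev1") | .state == "online" and .num_base_bdevs_discovered == 1'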
[2024-04-17 13:02:01.343368] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:57.201 [2024-04-17 13:02:01.343402] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:57.201 [2024-04-17 13:02:01.345947] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:57.201 [2024-04-17 13:02:01.346007] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:57.201 [2024-04-17 13:02:01.346130] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:57.201 [2024-04-17 13:02:01.346189] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:57.201 [2024-04-17 13:02:01.346303] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:17:57.201 [2024-04-17 13:02:01.346326] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:57.459 [2024-04-17 13:02:01.346441] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:17:57.459 [2024-04-17 13:02:01.346781] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:17:57.459 [2024-04-17 13:02:01.346804] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:17:57.459 [2024-04-17 13:02:01.346943] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:57.459 pt2 00:17:57.459 13:02:01 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:57.459 13:02:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:57.459 13:02:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:57.459 13:02:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:57.459 13:02:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:57.459 13:02:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:17:57.459 13:02:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:57.459 13:02:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:57.459 13:02:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:57.459 13:02:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:57.459 13:02:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.459 13:02:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.718 13:02:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:57.718 "name": "raid_bdev1", 00:17:57.718 "uuid": "e8a297e8-db1e-4f92-9f6f-0eb1bd340f4e", 00:17:57.718 "strip_size_kb": 0, 00:17:57.718 "state": "online", 00:17:57.718 "raid_level": "raid1", 00:17:57.718 "superblock": true, 00:17:57.718 "num_base_bdevs": 2, 00:17:57.718 "num_base_bdevs_discovered": 1, 00:17:57.718 "num_base_bdevs_operational": 1, 00:17:57.718 "base_bdevs_list": [ 00:17:57.718 { 00:17:57.718 "name": null, 00:17:57.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.718 "is_configured": false, 00:17:57.718 "data_offset": 2048, 00:17:57.718 "data_size": 63488 00:17:57.718 }, 00:17:57.718 { 00:17:57.718 "name": "pt2", 00:17:57.718 "uuid": "b5f906c5-e39a-58de-bb6f-5764a870e6f4", 00:17:57.718 "is_configured": true, 00:17:57.718 "data_offset": 2048, 00:17:57.718 "data_size": 63488 00:17:57.718 } 00:17:57.718 ] 00:17:57.718 }' 00:17:57.718 13:02:01 -- bdev/bdev_raid.sh@129 -- # 
xtrace_disable 00:17:57.718 13:02:01 -- common/autotest_common.sh@10 -- # set +x 00:17:58.285 13:02:02 -- bdev/bdev_raid.sh@468 -- # '[' 2 -gt 2 ']' 00:17:58.285 13:02:02 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:58.285 13:02:02 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:17:58.544 [2024-04-17 13:02:02.533003] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:58.544 13:02:02 -- bdev/bdev_raid.sh@506 -- # '[' e8a297e8-db1e-4f92-9f6f-0eb1bd340f4e '!=' e8a297e8-db1e-4f92-9f6f-0eb1bd340f4e ']' 00:17:58.544 13:02:02 -- bdev/bdev_raid.sh@511 -- # killprocess 121827 00:17:58.544 13:02:02 -- common/autotest_common.sh@924 -- # '[' -z 121827 ']' 00:17:58.544 13:02:02 -- common/autotest_common.sh@928 -- # kill -0 121827 00:17:58.544 13:02:02 -- common/autotest_common.sh@929 -- # uname 00:17:58.544 13:02:02 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:17:58.544 13:02:02 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 121827 00:17:58.544 killing process with pid 121827 00:17:58.544 13:02:02 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:17:58.544 13:02:02 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:17:58.544 13:02:02 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 121827' 00:17:58.544 13:02:02 -- common/autotest_common.sh@943 -- # kill 121827 00:17:58.544 13:02:02 -- common/autotest_common.sh@948 -- # wait 121827 00:17:58.544 [2024-04-17 13:02:02.567881] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:58.544 [2024-04-17 13:02:02.567981] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:58.544 [2024-04-17 13:02:02.568042] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:58.544 [2024-04-17 13:02:02.568066] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:17:58.802 [2024-04-17 13:02:02.733060] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:59.736 ************************************ 00:17:59.736 END TEST raid_superblock_test 00:17:59.736 ************************************ 00:17:59.736 13:02:03 -- bdev/bdev_raid.sh@513 -- # return 0 00:17:59.736 00:17:59.736 real 0m12.381s 00:17:59.736 user 0m22.075s 00:17:59.736 sys 0m1.424s 00:17:59.736 13:02:03 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:17:59.736 13:02:03 -- common/autotest_common.sh@10 -- # set +x 00:17:59.995 13:02:03 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:17:59.995 13:02:03 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:17:59.995 13:02:03 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:17:59.995 13:02:03 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:17:59.995 13:02:03 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:17:59.995 13:02:03 -- common/autotest_common.sh@10 -- # set +x 00:17:59.995 ************************************ 00:17:59.995 START TEST raid_state_function_test 00:17:59.995 ************************************ 00:17:59.995 13:02:03 -- common/autotest_common.sh@1099 -- # raid_state_function_test raid0 3 false 00:17:59.995 13:02:03 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:17:59.995 13:02:03 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:17:59.995 13:02:03 -- 
bdev/bdev_raid.sh@204 -- # local superblock=false 00:17:59.995 13:02:03 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:59.995 13:02:03 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:17:59.995 13:02:03 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:59.995 13:02:03 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:59.995 13:02:03 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:59.995 13:02:03 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:59.995 13:02:03 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:59.995 13:02:03 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:59.995 13:02:03 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:59.995 13:02:03 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:59.995 13:02:03 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:59.995 13:02:03 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:59.995 13:02:03 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:59.995 13:02:03 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:59.995 13:02:03 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:59.995 13:02:03 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:59.995 13:02:03 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:59.995 13:02:03 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:59.995 13:02:03 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:17:59.995 13:02:03 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:59.995 13:02:03 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:59.995 13:02:03 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:17:59.995 13:02:03 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:17:59.995 13:02:03 -- bdev/bdev_raid.sh@226 -- # raid_pid=122220 00:17:59.995 13:02:03 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 122220' 00:17:59.995 Process raid pid: 122220 00:17:59.995 13:02:03 -- bdev/bdev_raid.sh@228 -- # waitforlisten 122220 /var/tmp/spdk-raid.sock 00:17:59.995 13:02:03 -- common/autotest_common.sh@817 -- # '[' -z 122220 ']' 00:17:59.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:59.995 13:02:03 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:59.995 13:02:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:59.995 13:02:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:59.995 13:02:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:59.995 13:02:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:59.995 13:02:03 -- common/autotest_common.sh@10 -- # set +x 00:17:59.995 [2024-04-17 13:02:03.998961] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
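Relative to the raid1 runs above, raid_state_function_test switches to raid0 across three base bdevs with no superblock, and raid0 needs a strip size, so the helper turns strip_size=64 into '-z 64'. In sketch form, the create call this test drives (it appears verbatim further down in the trace), reusing the rpc shorthand from the earlier sketches:

    $rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    # None of the base bdevs exist yet at this point, so SPDK accepts the config and
    # parks Existed_Raid in state "configuring" until all three members register.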
00:17:59.995 [2024-04-17 13:02:03.999175] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.253 [2024-04-17 13:02:04.167551] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.253 [2024-04-17 13:02:04.394312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.511 [2024-04-17 13:02:04.592825] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:01.079 13:02:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:01.079 13:02:04 -- common/autotest_common.sh@850 -- # return 0 00:18:01.079 13:02:04 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:01.079 [2024-04-17 13:02:05.193084] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:01.079 [2024-04-17 13:02:05.193179] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:01.079 [2024-04-17 13:02:05.193195] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:01.079 [2024-04-17 13:02:05.193214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:01.079 [2024-04-17 13:02:05.193222] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:01.079 [2024-04-17 13:02:05.193271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:01.079 13:02:05 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:01.079 13:02:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:01.079 13:02:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:01.079 13:02:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:01.079 13:02:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:01.079 13:02:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:01.079 13:02:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:01.079 13:02:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:01.079 13:02:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:01.079 13:02:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:01.079 13:02:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.079 13:02:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:01.338 13:02:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:01.338 "name": "Existed_Raid", 00:18:01.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.338 "strip_size_kb": 64, 00:18:01.338 "state": "configuring", 00:18:01.338 "raid_level": "raid0", 00:18:01.338 "superblock": false, 00:18:01.338 "num_base_bdevs": 3, 00:18:01.338 "num_base_bdevs_discovered": 0, 00:18:01.338 "num_base_bdevs_operational": 3, 00:18:01.338 "base_bdevs_list": [ 00:18:01.338 { 00:18:01.338 "name": "BaseBdev1", 00:18:01.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.338 "is_configured": false, 00:18:01.338 "data_offset": 0, 00:18:01.338 "data_size": 0 00:18:01.338 }, 00:18:01.338 { 00:18:01.338 "name": "BaseBdev2", 00:18:01.338 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:01.338 "is_configured": false, 00:18:01.338 "data_offset": 0, 00:18:01.338 "data_size": 0 00:18:01.338 }, 00:18:01.338 { 00:18:01.338 "name": "BaseBdev3", 00:18:01.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.338 "is_configured": false, 00:18:01.338 "data_offset": 0, 00:18:01.338 "data_size": 0 00:18:01.338 } 00:18:01.338 ] 00:18:01.338 }' 00:18:01.338 13:02:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:01.338 13:02:05 -- common/autotest_common.sh@10 -- # set +x 00:18:02.275 13:02:06 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:02.275 [2024-04-17 13:02:06.405193] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:02.275 [2024-04-17 13:02:06.405250] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:18:02.532 13:02:06 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:02.532 [2024-04-17 13:02:06.673298] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:02.532 [2024-04-17 13:02:06.673387] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:02.532 [2024-04-17 13:02:06.673401] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:02.532 [2024-04-17 13:02:06.673428] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:02.532 [2024-04-17 13:02:06.673437] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:02.532 [2024-04-17 13:02:06.673463] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:02.790 13:02:06 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:03.049 [2024-04-17 13:02:06.936484] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:03.049 BaseBdev1 00:18:03.049 13:02:06 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:03.049 13:02:06 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:18:03.049 13:02:06 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:03.049 13:02:06 -- common/autotest_common.sh@887 -- # local i 00:18:03.049 13:02:06 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:03.049 13:02:06 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:03.049 13:02:06 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:03.049 13:02:07 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:03.307 [ 00:18:03.307 { 00:18:03.307 "name": "BaseBdev1", 00:18:03.307 "aliases": [ 00:18:03.307 "174f4154-6707-4cc5-b1fe-aa5af4c545ce" 00:18:03.307 ], 00:18:03.307 "product_name": "Malloc disk", 00:18:03.307 "block_size": 512, 00:18:03.307 "num_blocks": 65536, 00:18:03.307 "uuid": "174f4154-6707-4cc5-b1fe-aa5af4c545ce", 00:18:03.307 "assigned_rate_limits": { 00:18:03.307 "rw_ios_per_sec": 0, 00:18:03.307 "rw_mbytes_per_sec": 0, 00:18:03.307 "r_mbytes_per_sec": 0, 00:18:03.307 "w_mbytes_per_sec": 0 
00:18:03.307 }, 00:18:03.307 "claimed": true, 00:18:03.307 "claim_type": "exclusive_write", 00:18:03.307 "zoned": false, 00:18:03.307 "supported_io_types": { 00:18:03.307 "read": true, 00:18:03.307 "write": true, 00:18:03.307 "unmap": true, 00:18:03.307 "write_zeroes": true, 00:18:03.307 "flush": true, 00:18:03.308 "reset": true, 00:18:03.308 "compare": false, 00:18:03.308 "compare_and_write": false, 00:18:03.308 "abort": true, 00:18:03.308 "nvme_admin": false, 00:18:03.308 "nvme_io": false 00:18:03.308 }, 00:18:03.308 "memory_domains": [ 00:18:03.308 { 00:18:03.308 "dma_device_id": "system", 00:18:03.308 "dma_device_type": 1 00:18:03.308 }, 00:18:03.308 { 00:18:03.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:03.308 "dma_device_type": 2 00:18:03.308 } 00:18:03.308 ], 00:18:03.308 "driver_specific": {} 00:18:03.308 } 00:18:03.308 ] 00:18:03.308 13:02:07 -- common/autotest_common.sh@893 -- # return 0 00:18:03.308 13:02:07 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:03.308 13:02:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:03.308 13:02:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:03.308 13:02:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:03.308 13:02:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:03.308 13:02:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:03.308 13:02:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:03.308 13:02:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:03.308 13:02:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:03.308 13:02:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:03.308 13:02:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.308 13:02:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:03.566 13:02:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:03.566 "name": "Existed_Raid", 00:18:03.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.566 "strip_size_kb": 64, 00:18:03.566 "state": "configuring", 00:18:03.566 "raid_level": "raid0", 00:18:03.566 "superblock": false, 00:18:03.566 "num_base_bdevs": 3, 00:18:03.566 "num_base_bdevs_discovered": 1, 00:18:03.566 "num_base_bdevs_operational": 3, 00:18:03.566 "base_bdevs_list": [ 00:18:03.566 { 00:18:03.566 "name": "BaseBdev1", 00:18:03.566 "uuid": "174f4154-6707-4cc5-b1fe-aa5af4c545ce", 00:18:03.566 "is_configured": true, 00:18:03.566 "data_offset": 0, 00:18:03.566 "data_size": 65536 00:18:03.566 }, 00:18:03.566 { 00:18:03.566 "name": "BaseBdev2", 00:18:03.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.566 "is_configured": false, 00:18:03.566 "data_offset": 0, 00:18:03.566 "data_size": 0 00:18:03.566 }, 00:18:03.566 { 00:18:03.566 "name": "BaseBdev3", 00:18:03.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.567 "is_configured": false, 00:18:03.567 "data_offset": 0, 00:18:03.567 "data_size": 0 00:18:03.567 } 00:18:03.567 ] 00:18:03.567 }' 00:18:03.567 13:02:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:03.567 13:02:07 -- common/autotest_common.sh@10 -- # set +x 00:18:04.503 13:02:08 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:04.503 [2024-04-17 13:02:08.608915] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:04.503 
[2024-04-17 13:02:08.608995] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:18:04.503 13:02:08 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:18:04.503 13:02:08 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:04.762 [2024-04-17 13:02:08.893046] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:04.762 [2024-04-17 13:02:08.895205] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:04.762 [2024-04-17 13:02:08.895271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:04.762 [2024-04-17 13:02:08.895284] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:04.762 [2024-04-17 13:02:08.895311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:05.019 13:02:08 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:05.019 13:02:08 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:05.020 13:02:08 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:05.020 13:02:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:05.020 13:02:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:05.020 13:02:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:05.020 13:02:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:05.020 13:02:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:05.020 13:02:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:05.020 13:02:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:05.020 13:02:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:05.020 13:02:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:05.020 13:02:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:05.020 13:02:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:05.277 13:02:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:05.277 "name": "Existed_Raid", 00:18:05.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.277 "strip_size_kb": 64, 00:18:05.277 "state": "configuring", 00:18:05.277 "raid_level": "raid0", 00:18:05.277 "superblock": false, 00:18:05.277 "num_base_bdevs": 3, 00:18:05.277 "num_base_bdevs_discovered": 1, 00:18:05.277 "num_base_bdevs_operational": 3, 00:18:05.277 "base_bdevs_list": [ 00:18:05.277 { 00:18:05.277 "name": "BaseBdev1", 00:18:05.278 "uuid": "174f4154-6707-4cc5-b1fe-aa5af4c545ce", 00:18:05.278 "is_configured": true, 00:18:05.278 "data_offset": 0, 00:18:05.278 "data_size": 65536 00:18:05.278 }, 00:18:05.278 { 00:18:05.278 "name": "BaseBdev2", 00:18:05.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.278 "is_configured": false, 00:18:05.278 "data_offset": 0, 00:18:05.278 "data_size": 0 00:18:05.278 }, 00:18:05.278 { 00:18:05.278 "name": "BaseBdev3", 00:18:05.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.278 "is_configured": false, 00:18:05.278 "data_offset": 0, 00:18:05.278 "data_size": 0 00:18:05.278 } 00:18:05.278 ] 00:18:05.278 }' 00:18:05.278 13:02:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:05.278 13:02:09 -- common/autotest_common.sh@10 
-- # set +x 00:18:05.844 13:02:09 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:06.102 [2024-04-17 13:02:10.173911] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:06.102 BaseBdev2 00:18:06.102 13:02:10 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:06.102 13:02:10 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:18:06.102 13:02:10 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:06.102 13:02:10 -- common/autotest_common.sh@887 -- # local i 00:18:06.102 13:02:10 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:06.102 13:02:10 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:06.102 13:02:10 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:06.359 13:02:10 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:06.617 [ 00:18:06.617 { 00:18:06.617 "name": "BaseBdev2", 00:18:06.617 "aliases": [ 00:18:06.617 "eeee1581-03a8-47cc-b500-1b73575c9968" 00:18:06.617 ], 00:18:06.617 "product_name": "Malloc disk", 00:18:06.617 "block_size": 512, 00:18:06.617 "num_blocks": 65536, 00:18:06.617 "uuid": "eeee1581-03a8-47cc-b500-1b73575c9968", 00:18:06.617 "assigned_rate_limits": { 00:18:06.617 "rw_ios_per_sec": 0, 00:18:06.617 "rw_mbytes_per_sec": 0, 00:18:06.617 "r_mbytes_per_sec": 0, 00:18:06.617 "w_mbytes_per_sec": 0 00:18:06.617 }, 00:18:06.617 "claimed": true, 00:18:06.617 "claim_type": "exclusive_write", 00:18:06.617 "zoned": false, 00:18:06.617 "supported_io_types": { 00:18:06.617 "read": true, 00:18:06.617 "write": true, 00:18:06.617 "unmap": true, 00:18:06.617 "write_zeroes": true, 00:18:06.617 "flush": true, 00:18:06.617 "reset": true, 00:18:06.617 "compare": false, 00:18:06.617 "compare_and_write": false, 00:18:06.617 "abort": true, 00:18:06.617 "nvme_admin": false, 00:18:06.617 "nvme_io": false 00:18:06.617 }, 00:18:06.617 "memory_domains": [ 00:18:06.617 { 00:18:06.617 "dma_device_id": "system", 00:18:06.617 "dma_device_type": 1 00:18:06.617 }, 00:18:06.617 { 00:18:06.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.617 "dma_device_type": 2 00:18:06.617 } 00:18:06.617 ], 00:18:06.617 "driver_specific": {} 00:18:06.617 } 00:18:06.617 ] 00:18:06.617 13:02:10 -- common/autotest_common.sh@893 -- # return 0 00:18:06.617 13:02:10 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:06.617 13:02:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:06.617 13:02:10 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:06.617 13:02:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:06.617 13:02:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:06.617 13:02:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:06.617 13:02:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:06.617 13:02:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:06.617 13:02:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:06.617 13:02:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:06.617 13:02:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:06.617 13:02:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:06.617 13:02:10 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:06.617 13:02:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:06.875 13:02:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:06.875 "name": "Existed_Raid", 00:18:06.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.875 "strip_size_kb": 64, 00:18:06.875 "state": "configuring", 00:18:06.875 "raid_level": "raid0", 00:18:06.875 "superblock": false, 00:18:06.875 "num_base_bdevs": 3, 00:18:06.875 "num_base_bdevs_discovered": 2, 00:18:06.875 "num_base_bdevs_operational": 3, 00:18:06.875 "base_bdevs_list": [ 00:18:06.875 { 00:18:06.875 "name": "BaseBdev1", 00:18:06.875 "uuid": "174f4154-6707-4cc5-b1fe-aa5af4c545ce", 00:18:06.875 "is_configured": true, 00:18:06.875 "data_offset": 0, 00:18:06.875 "data_size": 65536 00:18:06.875 }, 00:18:06.875 { 00:18:06.875 "name": "BaseBdev2", 00:18:06.875 "uuid": "eeee1581-03a8-47cc-b500-1b73575c9968", 00:18:06.875 "is_configured": true, 00:18:06.875 "data_offset": 0, 00:18:06.875 "data_size": 65536 00:18:06.875 }, 00:18:06.875 { 00:18:06.875 "name": "BaseBdev3", 00:18:06.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.875 "is_configured": false, 00:18:06.875 "data_offset": 0, 00:18:06.875 "data_size": 0 00:18:06.875 } 00:18:06.875 ] 00:18:06.875 }' 00:18:06.875 13:02:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:06.875 13:02:10 -- common/autotest_common.sh@10 -- # set +x 00:18:07.443 13:02:11 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:08.010 [2024-04-17 13:02:11.859836] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:08.010 [2024-04-17 13:02:11.859893] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:18:08.010 [2024-04-17 13:02:11.859904] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:08.010 [2024-04-17 13:02:11.860061] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:18:08.010 [2024-04-17 13:02:11.860443] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:18:08.010 [2024-04-17 13:02:11.860469] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:18:08.010 [2024-04-17 13:02:11.860731] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:08.010 BaseBdev3 00:18:08.010 13:02:11 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:08.010 13:02:11 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:18:08.010 13:02:11 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:08.010 13:02:11 -- common/autotest_common.sh@887 -- # local i 00:18:08.010 13:02:11 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:08.010 13:02:11 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:08.010 13:02:11 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:08.010 13:02:12 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:08.269 [ 00:18:08.269 { 00:18:08.269 "name": "BaseBdev3", 00:18:08.269 "aliases": [ 00:18:08.269 "76848283-5d69-4054-8366-c64a5552a1cc" 00:18:08.269 ], 00:18:08.269 "product_name": 
"Malloc disk", 00:18:08.269 "block_size": 512, 00:18:08.269 "num_blocks": 65536, 00:18:08.269 "uuid": "76848283-5d69-4054-8366-c64a5552a1cc", 00:18:08.269 "assigned_rate_limits": { 00:18:08.269 "rw_ios_per_sec": 0, 00:18:08.269 "rw_mbytes_per_sec": 0, 00:18:08.269 "r_mbytes_per_sec": 0, 00:18:08.269 "w_mbytes_per_sec": 0 00:18:08.269 }, 00:18:08.269 "claimed": true, 00:18:08.269 "claim_type": "exclusive_write", 00:18:08.269 "zoned": false, 00:18:08.269 "supported_io_types": { 00:18:08.269 "read": true, 00:18:08.269 "write": true, 00:18:08.269 "unmap": true, 00:18:08.269 "write_zeroes": true, 00:18:08.269 "flush": true, 00:18:08.269 "reset": true, 00:18:08.269 "compare": false, 00:18:08.269 "compare_and_write": false, 00:18:08.269 "abort": true, 00:18:08.269 "nvme_admin": false, 00:18:08.269 "nvme_io": false 00:18:08.269 }, 00:18:08.269 "memory_domains": [ 00:18:08.269 { 00:18:08.269 "dma_device_id": "system", 00:18:08.269 "dma_device_type": 1 00:18:08.269 }, 00:18:08.269 { 00:18:08.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.269 "dma_device_type": 2 00:18:08.269 } 00:18:08.269 ], 00:18:08.269 "driver_specific": {} 00:18:08.269 } 00:18:08.269 ] 00:18:08.269 13:02:12 -- common/autotest_common.sh@893 -- # return 0 00:18:08.269 13:02:12 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:08.269 13:02:12 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:08.269 13:02:12 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:18:08.269 13:02:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:08.269 13:02:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:08.269 13:02:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:08.269 13:02:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:08.269 13:02:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:08.269 13:02:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:08.269 13:02:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:08.269 13:02:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:08.269 13:02:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:08.269 13:02:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.269 13:02:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:08.837 13:02:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:08.837 "name": "Existed_Raid", 00:18:08.837 "uuid": "5c00f24d-9f78-4d36-9a5e-5742d5b1e7ca", 00:18:08.837 "strip_size_kb": 64, 00:18:08.837 "state": "online", 00:18:08.837 "raid_level": "raid0", 00:18:08.837 "superblock": false, 00:18:08.837 "num_base_bdevs": 3, 00:18:08.837 "num_base_bdevs_discovered": 3, 00:18:08.837 "num_base_bdevs_operational": 3, 00:18:08.837 "base_bdevs_list": [ 00:18:08.837 { 00:18:08.837 "name": "BaseBdev1", 00:18:08.837 "uuid": "174f4154-6707-4cc5-b1fe-aa5af4c545ce", 00:18:08.837 "is_configured": true, 00:18:08.837 "data_offset": 0, 00:18:08.837 "data_size": 65536 00:18:08.837 }, 00:18:08.837 { 00:18:08.837 "name": "BaseBdev2", 00:18:08.837 "uuid": "eeee1581-03a8-47cc-b500-1b73575c9968", 00:18:08.837 "is_configured": true, 00:18:08.837 "data_offset": 0, 00:18:08.837 "data_size": 65536 00:18:08.837 }, 00:18:08.837 { 00:18:08.837 "name": "BaseBdev3", 00:18:08.837 "uuid": "76848283-5d69-4054-8366-c64a5552a1cc", 00:18:08.837 "is_configured": true, 00:18:08.837 "data_offset": 0, 00:18:08.837 "data_size": 65536 
00:18:08.837 } 00:18:08.837 ] 00:18:08.837 }' 00:18:08.837 13:02:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:08.837 13:02:12 -- common/autotest_common.sh@10 -- # set +x 00:18:09.465 13:02:13 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:09.767 [2024-04-17 13:02:13.764433] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:09.767 [2024-04-17 13:02:13.764481] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:09.767 [2024-04-17 13:02:13.764552] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:09.767 13:02:13 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:09.767 13:02:13 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:18:09.767 13:02:13 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:09.767 13:02:13 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:09.767 13:02:13 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:18:09.767 13:02:13 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:18:09.767 13:02:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:09.767 13:02:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:09.767 13:02:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:09.767 13:02:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:09.767 13:02:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:09.767 13:02:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:09.767 13:02:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:09.767 13:02:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:09.767 13:02:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:09.767 13:02:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:09.767 13:02:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:10.026 13:02:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:10.026 "name": "Existed_Raid", 00:18:10.026 "uuid": "5c00f24d-9f78-4d36-9a5e-5742d5b1e7ca", 00:18:10.026 "strip_size_kb": 64, 00:18:10.026 "state": "offline", 00:18:10.026 "raid_level": "raid0", 00:18:10.026 "superblock": false, 00:18:10.026 "num_base_bdevs": 3, 00:18:10.026 "num_base_bdevs_discovered": 2, 00:18:10.026 "num_base_bdevs_operational": 2, 00:18:10.026 "base_bdevs_list": [ 00:18:10.026 { 00:18:10.026 "name": null, 00:18:10.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.026 "is_configured": false, 00:18:10.026 "data_offset": 0, 00:18:10.026 "data_size": 65536 00:18:10.026 }, 00:18:10.026 { 00:18:10.026 "name": "BaseBdev2", 00:18:10.026 "uuid": "eeee1581-03a8-47cc-b500-1b73575c9968", 00:18:10.026 "is_configured": true, 00:18:10.026 "data_offset": 0, 00:18:10.026 "data_size": 65536 00:18:10.026 }, 00:18:10.026 { 00:18:10.026 "name": "BaseBdev3", 00:18:10.026 "uuid": "76848283-5d69-4054-8366-c64a5552a1cc", 00:18:10.026 "is_configured": true, 00:18:10.026 "data_offset": 0, 00:18:10.026 "data_size": 65536 00:18:10.026 } 00:18:10.026 ] 00:18:10.026 }' 00:18:10.026 13:02:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:10.026 13:02:14 -- common/autotest_common.sh@10 -- # set +x 00:18:10.983 13:02:14 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:10.983 13:02:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:10.983 13:02:14 -- 
bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:10.983 13:02:14 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:10.983 13:02:15 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:10.983 13:02:15 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:10.983 13:02:15 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:11.242 [2024-04-17 13:02:15.329852] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:11.501 13:02:15 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:11.501 13:02:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:11.501 13:02:15 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:11.501 13:02:15 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:11.760 13:02:15 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:11.760 13:02:15 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:11.760 13:02:15 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:12.019 [2024-04-17 13:02:15.910050] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:12.019 [2024-04-17 13:02:15.910122] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:18:12.019 13:02:15 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:12.019 13:02:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:12.019 13:02:16 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.019 13:02:16 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:12.278 13:02:16 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:12.278 13:02:16 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:12.278 13:02:16 -- bdev/bdev_raid.sh@287 -- # killprocess 122220 00:18:12.278 13:02:16 -- common/autotest_common.sh@924 -- # '[' -z 122220 ']' 00:18:12.278 13:02:16 -- common/autotest_common.sh@928 -- # kill -0 122220 00:18:12.278 13:02:16 -- common/autotest_common.sh@929 -- # uname 00:18:12.278 13:02:16 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:18:12.278 13:02:16 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 122220 00:18:12.278 killing process with pid 122220 00:18:12.278 13:02:16 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:18:12.278 13:02:16 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:18:12.278 13:02:16 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 122220' 00:18:12.278 13:02:16 -- common/autotest_common.sh@943 -- # kill 122220 00:18:12.278 13:02:16 -- common/autotest_common.sh@948 -- # wait 122220 00:18:12.278 [2024-04-17 13:02:16.238654] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:12.278 [2024-04-17 13:02:16.238772] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:13.214 ************************************ 00:18:13.214 END TEST raid_state_function_test 00:18:13.214 ************************************ 00:18:13.214 13:02:17 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:13.214 00:18:13.214 real 0m13.417s 00:18:13.214 user 0m24.056s 00:18:13.214 sys 0m1.380s 00:18:13.214 13:02:17 -- common/autotest_common.sh@1100 -- # 
xtrace_disable 00:18:13.214 13:02:17 -- common/autotest_common.sh@10 -- # set +x 00:18:13.481 13:02:17 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:18:13.481 13:02:17 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:18:13.481 13:02:17 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:18:13.481 13:02:17 -- common/autotest_common.sh@10 -- # set +x 00:18:13.481 ************************************ 00:18:13.481 START TEST raid_state_function_test_sb 00:18:13.481 ************************************ 00:18:13.481 13:02:17 -- common/autotest_common.sh@1099 -- # raid_state_function_test raid0 3 true 00:18:13.481 13:02:17 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:18:13.481 13:02:17 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:18:13.481 13:02:17 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:18:13.481 13:02:17 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:13.481 13:02:17 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:18:13.481 13:02:17 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:13.481 13:02:17 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:13.481 13:02:17 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:13.481 13:02:17 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:13.481 13:02:17 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:13.481 13:02:17 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:13.481 13:02:17 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:13.481 13:02:17 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:13.481 13:02:17 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:13.481 13:02:17 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:13.481 13:02:17 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:13.481 13:02:17 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:13.481 13:02:17 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:13.481 13:02:17 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:13.481 13:02:17 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:13.481 13:02:17 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:13.481 13:02:17 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:18:13.481 13:02:17 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:18:13.481 13:02:17 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:18:13.481 13:02:17 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:18:13.481 13:02:17 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:18:13.481 13:02:17 -- bdev/bdev_raid.sh@226 -- # raid_pid=122640 00:18:13.481 Process raid pid: 122640 00:18:13.481 13:02:17 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 122640' 00:18:13.481 13:02:17 -- bdev/bdev_raid.sh@228 -- # waitforlisten 122640 /var/tmp/spdk-raid.sock 00:18:13.481 13:02:17 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:13.481 13:02:17 -- common/autotest_common.sh@817 -- # '[' -z 122640 ']' 00:18:13.481 13:02:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:13.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
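The raid_state_function_test that just completed drives the raid bdev state machine entirely over the RPC socket: it creates 32 MB malloc base bdevs (65536 blocks of 512 bytes each), assembles them into a raid0 array with a 64 KB strip, queries bdev_raid_get_bdevs to confirm each state, then deletes base bdevs to force the configuring -> online -> offline transitions visible in the trace; without a superblock the array exposes all 3 x 65536 = 196608 blocks, matching the "blockcnt 196608, blocklen 512" line above. A minimal hand-run sketch of that flow, assuming a bdev_svc app is already listening on /var/tmp/spdk-raid.sock as in the log; the SPDK_DIR variable and the loop are conveniences that do not appear in the original script:

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # Three 32 MB malloc bdevs with 512-byte blocks (65536 blocks each).
  for b in BaseBdev1 BaseBdev2 BaseBdev3; do
      $RPC bdev_malloc_create 32 512 -b "$b"
  done

  # raid0 over the three bases with a 64 KB strip; the array goes "online"
  # once every base bdev is claimed.
  $RPC bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'

  # raid0 carries no redundancy, so deleting any base drives the array offline.
  $RPC bdev_malloc_delete BaseBdev1
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'

The jq projection to .state is an illustrative shorthand for what verify_raid_bdev_state extracts from the same bdev_raid_get_bdevs output in the trace.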
00:18:13.481 13:02:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:13.481 13:02:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:13.481 13:02:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:13.481 13:02:17 -- common/autotest_common.sh@10 -- # set +x 00:18:13.481 [2024-04-17 13:02:17.488376] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:18:13.481 [2024-04-17 13:02:17.488574] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:13.767 [2024-04-17 13:02:17.647113] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.767 [2024-04-17 13:02:17.852876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.026 [2024-04-17 13:02:18.052653] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:14.594 13:02:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:14.594 13:02:18 -- common/autotest_common.sh@850 -- # return 0 00:18:14.594 13:02:18 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:14.594 [2024-04-17 13:02:18.701210] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:14.594 [2024-04-17 13:02:18.701296] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:14.594 [2024-04-17 13:02:18.701312] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:14.594 [2024-04-17 13:02:18.701332] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:14.594 [2024-04-17 13:02:18.701339] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:14.594 [2024-04-17 13:02:18.701383] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:14.594 13:02:18 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:14.594 13:02:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:14.594 13:02:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:14.594 13:02:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:14.594 13:02:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:14.594 13:02:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:14.594 13:02:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:14.594 13:02:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:14.594 13:02:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:14.594 13:02:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:14.594 13:02:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.594 13:02:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:14.853 13:02:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:14.853 "name": "Existed_Raid", 00:18:14.853 "uuid": "7c0e23b5-21d6-439c-a8f6-c6731fe3fa89", 00:18:14.853 "strip_size_kb": 64, 00:18:14.853 "state": "configuring", 00:18:14.853 "raid_level": 
"raid0", 00:18:14.853 "superblock": true, 00:18:14.853 "num_base_bdevs": 3, 00:18:14.853 "num_base_bdevs_discovered": 0, 00:18:14.853 "num_base_bdevs_operational": 3, 00:18:14.853 "base_bdevs_list": [ 00:18:14.853 { 00:18:14.853 "name": "BaseBdev1", 00:18:14.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.853 "is_configured": false, 00:18:14.853 "data_offset": 0, 00:18:14.853 "data_size": 0 00:18:14.853 }, 00:18:14.853 { 00:18:14.853 "name": "BaseBdev2", 00:18:14.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.853 "is_configured": false, 00:18:14.853 "data_offset": 0, 00:18:14.853 "data_size": 0 00:18:14.853 }, 00:18:14.853 { 00:18:14.853 "name": "BaseBdev3", 00:18:14.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.853 "is_configured": false, 00:18:14.853 "data_offset": 0, 00:18:14.853 "data_size": 0 00:18:14.853 } 00:18:14.853 ] 00:18:14.853 }' 00:18:14.853 13:02:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:14.853 13:02:18 -- common/autotest_common.sh@10 -- # set +x 00:18:15.789 13:02:19 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:16.048 [2024-04-17 13:02:19.945343] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:16.048 [2024-04-17 13:02:19.945409] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:18:16.048 13:02:19 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:16.307 [2024-04-17 13:02:20.213462] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:16.307 [2024-04-17 13:02:20.213554] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:16.307 [2024-04-17 13:02:20.213569] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:16.307 [2024-04-17 13:02:20.213600] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:16.307 [2024-04-17 13:02:20.213617] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:16.307 [2024-04-17 13:02:20.213653] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:16.307 13:02:20 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:16.566 [2024-04-17 13:02:20.477285] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:16.566 BaseBdev1 00:18:16.566 13:02:20 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:16.566 13:02:20 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:18:16.566 13:02:20 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:16.566 13:02:20 -- common/autotest_common.sh@887 -- # local i 00:18:16.566 13:02:20 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:16.566 13:02:20 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:16.566 13:02:20 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:16.825 13:02:20 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:16.825 
[ 00:18:16.825 { 00:18:16.825 "name": "BaseBdev1", 00:18:16.825 "aliases": [ 00:18:16.825 "d179e9fe-ad9c-4e5f-a050-756d12d9b363" 00:18:16.825 ], 00:18:16.825 "product_name": "Malloc disk", 00:18:16.825 "block_size": 512, 00:18:16.825 "num_blocks": 65536, 00:18:16.825 "uuid": "d179e9fe-ad9c-4e5f-a050-756d12d9b363", 00:18:16.825 "assigned_rate_limits": { 00:18:16.825 "rw_ios_per_sec": 0, 00:18:16.825 "rw_mbytes_per_sec": 0, 00:18:16.825 "r_mbytes_per_sec": 0, 00:18:16.825 "w_mbytes_per_sec": 0 00:18:16.825 }, 00:18:16.825 "claimed": true, 00:18:16.825 "claim_type": "exclusive_write", 00:18:16.825 "zoned": false, 00:18:16.825 "supported_io_types": { 00:18:16.825 "read": true, 00:18:16.825 "write": true, 00:18:16.825 "unmap": true, 00:18:16.825 "write_zeroes": true, 00:18:16.825 "flush": true, 00:18:16.825 "reset": true, 00:18:16.825 "compare": false, 00:18:16.825 "compare_and_write": false, 00:18:16.825 "abort": true, 00:18:16.825 "nvme_admin": false, 00:18:16.825 "nvme_io": false 00:18:16.825 }, 00:18:16.825 "memory_domains": [ 00:18:16.825 { 00:18:16.825 "dma_device_id": "system", 00:18:16.825 "dma_device_type": 1 00:18:16.825 }, 00:18:16.825 { 00:18:16.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:16.825 "dma_device_type": 2 00:18:16.825 } 00:18:16.825 ], 00:18:16.825 "driver_specific": {} 00:18:16.825 } 00:18:16.825 ] 00:18:17.084 13:02:20 -- common/autotest_common.sh@893 -- # return 0 00:18:17.084 13:02:20 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:17.084 13:02:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:17.084 13:02:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:17.084 13:02:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:17.084 13:02:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:17.084 13:02:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:17.084 13:02:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:17.084 13:02:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:17.084 13:02:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:17.084 13:02:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:17.084 13:02:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.084 13:02:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:17.343 13:02:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:17.343 "name": "Existed_Raid", 00:18:17.343 "uuid": "6b9de2b0-2e79-42a9-ae90-fcdcbec08cb0", 00:18:17.343 "strip_size_kb": 64, 00:18:17.343 "state": "configuring", 00:18:17.343 "raid_level": "raid0", 00:18:17.343 "superblock": true, 00:18:17.343 "num_base_bdevs": 3, 00:18:17.343 "num_base_bdevs_discovered": 1, 00:18:17.343 "num_base_bdevs_operational": 3, 00:18:17.343 "base_bdevs_list": [ 00:18:17.343 { 00:18:17.343 "name": "BaseBdev1", 00:18:17.343 "uuid": "d179e9fe-ad9c-4e5f-a050-756d12d9b363", 00:18:17.343 "is_configured": true, 00:18:17.343 "data_offset": 2048, 00:18:17.343 "data_size": 63488 00:18:17.343 }, 00:18:17.343 { 00:18:17.343 "name": "BaseBdev2", 00:18:17.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.343 "is_configured": false, 00:18:17.343 "data_offset": 0, 00:18:17.343 "data_size": 0 00:18:17.343 }, 00:18:17.343 { 00:18:17.343 "name": "BaseBdev3", 00:18:17.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.343 "is_configured": false, 00:18:17.343 
"data_offset": 0, 00:18:17.343 "data_size": 0 00:18:17.343 } 00:18:17.343 ] 00:18:17.343 }' 00:18:17.343 13:02:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:17.343 13:02:21 -- common/autotest_common.sh@10 -- # set +x 00:18:17.909 13:02:21 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:18.167 [2024-04-17 13:02:22.117729] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:18.168 [2024-04-17 13:02:22.117816] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:18:18.168 13:02:22 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:18:18.168 13:02:22 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:18.425 13:02:22 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:18.684 BaseBdev1 00:18:18.684 13:02:22 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:18:18.684 13:02:22 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:18:18.684 13:02:22 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:18.684 13:02:22 -- common/autotest_common.sh@887 -- # local i 00:18:18.684 13:02:22 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:18.684 13:02:22 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:18.684 13:02:22 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:18.942 13:02:22 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:19.200 [ 00:18:19.200 { 00:18:19.200 "name": "BaseBdev1", 00:18:19.200 "aliases": [ 00:18:19.200 "80e07bf1-b36f-4a59-921e-f01d91dd5387" 00:18:19.200 ], 00:18:19.200 "product_name": "Malloc disk", 00:18:19.200 "block_size": 512, 00:18:19.200 "num_blocks": 65536, 00:18:19.200 "uuid": "80e07bf1-b36f-4a59-921e-f01d91dd5387", 00:18:19.200 "assigned_rate_limits": { 00:18:19.200 "rw_ios_per_sec": 0, 00:18:19.200 "rw_mbytes_per_sec": 0, 00:18:19.200 "r_mbytes_per_sec": 0, 00:18:19.200 "w_mbytes_per_sec": 0 00:18:19.200 }, 00:18:19.200 "claimed": false, 00:18:19.200 "zoned": false, 00:18:19.200 "supported_io_types": { 00:18:19.200 "read": true, 00:18:19.200 "write": true, 00:18:19.200 "unmap": true, 00:18:19.200 "write_zeroes": true, 00:18:19.200 "flush": true, 00:18:19.200 "reset": true, 00:18:19.200 "compare": false, 00:18:19.200 "compare_and_write": false, 00:18:19.200 "abort": true, 00:18:19.200 "nvme_admin": false, 00:18:19.200 "nvme_io": false 00:18:19.200 }, 00:18:19.200 "memory_domains": [ 00:18:19.200 { 00:18:19.200 "dma_device_id": "system", 00:18:19.200 "dma_device_type": 1 00:18:19.200 }, 00:18:19.200 { 00:18:19.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.200 "dma_device_type": 2 00:18:19.200 } 00:18:19.200 ], 00:18:19.200 "driver_specific": {} 00:18:19.200 } 00:18:19.200 ] 00:18:19.200 13:02:23 -- common/autotest_common.sh@893 -- # return 0 00:18:19.200 13:02:23 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:19.458 [2024-04-17 13:02:23.395483] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is 
claimed 00:18:19.458 [2024-04-17 13:02:23.397658] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:19.458 [2024-04-17 13:02:23.397739] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:19.458 [2024-04-17 13:02:23.397752] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:19.458 [2024-04-17 13:02:23.397783] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:19.458 13:02:23 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:19.458 13:02:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:19.458 13:02:23 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:19.458 13:02:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:19.458 13:02:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:19.458 13:02:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:19.458 13:02:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:19.458 13:02:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:19.458 13:02:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:19.458 13:02:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:19.458 13:02:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:19.458 13:02:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:19.458 13:02:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:19.458 13:02:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:19.715 13:02:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:19.715 "name": "Existed_Raid", 00:18:19.715 "uuid": "c139d3f7-814f-4946-8101-6ac007abde06", 00:18:19.715 "strip_size_kb": 64, 00:18:19.715 "state": "configuring", 00:18:19.715 "raid_level": "raid0", 00:18:19.715 "superblock": true, 00:18:19.715 "num_base_bdevs": 3, 00:18:19.715 "num_base_bdevs_discovered": 1, 00:18:19.715 "num_base_bdevs_operational": 3, 00:18:19.715 "base_bdevs_list": [ 00:18:19.715 { 00:18:19.715 "name": "BaseBdev1", 00:18:19.715 "uuid": "80e07bf1-b36f-4a59-921e-f01d91dd5387", 00:18:19.715 "is_configured": true, 00:18:19.716 "data_offset": 2048, 00:18:19.716 "data_size": 63488 00:18:19.716 }, 00:18:19.716 { 00:18:19.716 "name": "BaseBdev2", 00:18:19.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.716 "is_configured": false, 00:18:19.716 "data_offset": 0, 00:18:19.716 "data_size": 0 00:18:19.716 }, 00:18:19.716 { 00:18:19.716 "name": "BaseBdev3", 00:18:19.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.716 "is_configured": false, 00:18:19.716 "data_offset": 0, 00:18:19.716 "data_size": 0 00:18:19.716 } 00:18:19.716 ] 00:18:19.716 }' 00:18:19.716 13:02:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:19.716 13:02:23 -- common/autotest_common.sh@10 -- # set +x 00:18:20.288 13:02:24 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:20.551 [2024-04-17 13:02:24.611009] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:20.551 BaseBdev2 00:18:20.551 13:02:24 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:20.551 13:02:24 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:18:20.551 13:02:24 -- common/autotest_common.sh@886 
-- # local bdev_timeout= 00:18:20.551 13:02:24 -- common/autotest_common.sh@887 -- # local i 00:18:20.551 13:02:24 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:20.551 13:02:24 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:20.551 13:02:24 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:20.810 13:02:24 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:21.069 [ 00:18:21.069 { 00:18:21.069 "name": "BaseBdev2", 00:18:21.070 "aliases": [ 00:18:21.070 "0b8dfeea-8d45-4363-951e-959538fefdd5" 00:18:21.070 ], 00:18:21.070 "product_name": "Malloc disk", 00:18:21.070 "block_size": 512, 00:18:21.070 "num_blocks": 65536, 00:18:21.070 "uuid": "0b8dfeea-8d45-4363-951e-959538fefdd5", 00:18:21.070 "assigned_rate_limits": { 00:18:21.070 "rw_ios_per_sec": 0, 00:18:21.070 "rw_mbytes_per_sec": 0, 00:18:21.070 "r_mbytes_per_sec": 0, 00:18:21.070 "w_mbytes_per_sec": 0 00:18:21.070 }, 00:18:21.070 "claimed": true, 00:18:21.070 "claim_type": "exclusive_write", 00:18:21.070 "zoned": false, 00:18:21.070 "supported_io_types": { 00:18:21.070 "read": true, 00:18:21.070 "write": true, 00:18:21.070 "unmap": true, 00:18:21.070 "write_zeroes": true, 00:18:21.070 "flush": true, 00:18:21.070 "reset": true, 00:18:21.070 "compare": false, 00:18:21.070 "compare_and_write": false, 00:18:21.070 "abort": true, 00:18:21.070 "nvme_admin": false, 00:18:21.070 "nvme_io": false 00:18:21.070 }, 00:18:21.070 "memory_domains": [ 00:18:21.070 { 00:18:21.070 "dma_device_id": "system", 00:18:21.070 "dma_device_type": 1 00:18:21.070 }, 00:18:21.070 { 00:18:21.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:21.070 "dma_device_type": 2 00:18:21.070 } 00:18:21.070 ], 00:18:21.070 "driver_specific": {} 00:18:21.070 } 00:18:21.070 ] 00:18:21.070 13:02:25 -- common/autotest_common.sh@893 -- # return 0 00:18:21.070 13:02:25 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:21.070 13:02:25 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:21.070 13:02:25 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:21.070 13:02:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:21.070 13:02:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:21.070 13:02:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:21.070 13:02:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:21.070 13:02:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:21.070 13:02:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:21.070 13:02:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:21.070 13:02:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:21.070 13:02:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:21.070 13:02:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.070 13:02:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:21.329 13:02:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:21.329 "name": "Existed_Raid", 00:18:21.329 "uuid": "c139d3f7-814f-4946-8101-6ac007abde06", 00:18:21.329 "strip_size_kb": 64, 00:18:21.329 "state": "configuring", 00:18:21.329 "raid_level": "raid0", 00:18:21.329 "superblock": true, 00:18:21.329 "num_base_bdevs": 3, 00:18:21.329 
"num_base_bdevs_discovered": 2, 00:18:21.329 "num_base_bdevs_operational": 3, 00:18:21.329 "base_bdevs_list": [ 00:18:21.329 { 00:18:21.329 "name": "BaseBdev1", 00:18:21.329 "uuid": "80e07bf1-b36f-4a59-921e-f01d91dd5387", 00:18:21.329 "is_configured": true, 00:18:21.329 "data_offset": 2048, 00:18:21.329 "data_size": 63488 00:18:21.329 }, 00:18:21.329 { 00:18:21.329 "name": "BaseBdev2", 00:18:21.329 "uuid": "0b8dfeea-8d45-4363-951e-959538fefdd5", 00:18:21.329 "is_configured": true, 00:18:21.329 "data_offset": 2048, 00:18:21.329 "data_size": 63488 00:18:21.329 }, 00:18:21.329 { 00:18:21.329 "name": "BaseBdev3", 00:18:21.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.329 "is_configured": false, 00:18:21.329 "data_offset": 0, 00:18:21.329 "data_size": 0 00:18:21.329 } 00:18:21.329 ] 00:18:21.329 }' 00:18:21.329 13:02:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:21.329 13:02:25 -- common/autotest_common.sh@10 -- # set +x 00:18:21.896 13:02:25 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:22.155 [2024-04-17 13:02:26.240340] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:22.155 [2024-04-17 13:02:26.240579] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:18:22.155 [2024-04-17 13:02:26.240596] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:22.155 [2024-04-17 13:02:26.240748] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:18:22.155 BaseBdev3 00:18:22.156 [2024-04-17 13:02:26.241093] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:18:22.156 [2024-04-17 13:02:26.241108] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:18:22.156 [2024-04-17 13:02:26.241251] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:22.156 13:02:26 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:22.156 13:02:26 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:18:22.156 13:02:26 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:22.156 13:02:26 -- common/autotest_common.sh@887 -- # local i 00:18:22.156 13:02:26 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:22.156 13:02:26 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:22.156 13:02:26 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:22.415 13:02:26 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:22.674 [ 00:18:22.674 { 00:18:22.674 "name": "BaseBdev3", 00:18:22.674 "aliases": [ 00:18:22.674 "ec220a93-3afa-4847-815b-2a1250a21b4a" 00:18:22.674 ], 00:18:22.674 "product_name": "Malloc disk", 00:18:22.674 "block_size": 512, 00:18:22.674 "num_blocks": 65536, 00:18:22.674 "uuid": "ec220a93-3afa-4847-815b-2a1250a21b4a", 00:18:22.674 "assigned_rate_limits": { 00:18:22.674 "rw_ios_per_sec": 0, 00:18:22.674 "rw_mbytes_per_sec": 0, 00:18:22.674 "r_mbytes_per_sec": 0, 00:18:22.674 "w_mbytes_per_sec": 0 00:18:22.674 }, 00:18:22.674 "claimed": true, 00:18:22.674 "claim_type": "exclusive_write", 00:18:22.674 "zoned": false, 00:18:22.674 "supported_io_types": { 00:18:22.674 "read": true, 00:18:22.674 "write": true, 
00:18:22.674 "unmap": true, 00:18:22.674 "write_zeroes": true, 00:18:22.674 "flush": true, 00:18:22.674 "reset": true, 00:18:22.674 "compare": false, 00:18:22.674 "compare_and_write": false, 00:18:22.674 "abort": true, 00:18:22.674 "nvme_admin": false, 00:18:22.674 "nvme_io": false 00:18:22.674 }, 00:18:22.674 "memory_domains": [ 00:18:22.674 { 00:18:22.674 "dma_device_id": "system", 00:18:22.674 "dma_device_type": 1 00:18:22.674 }, 00:18:22.674 { 00:18:22.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.674 "dma_device_type": 2 00:18:22.674 } 00:18:22.674 ], 00:18:22.674 "driver_specific": {} 00:18:22.674 } 00:18:22.674 ] 00:18:22.674 13:02:26 -- common/autotest_common.sh@893 -- # return 0 00:18:22.674 13:02:26 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:22.674 13:02:26 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:22.674 13:02:26 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:18:22.674 13:02:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:22.674 13:02:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:22.674 13:02:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:22.674 13:02:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:22.674 13:02:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:22.674 13:02:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:22.674 13:02:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:22.674 13:02:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:22.674 13:02:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:22.674 13:02:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:22.674 13:02:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:22.933 13:02:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:22.933 "name": "Existed_Raid", 00:18:22.933 "uuid": "c139d3f7-814f-4946-8101-6ac007abde06", 00:18:22.933 "strip_size_kb": 64, 00:18:22.933 "state": "online", 00:18:22.933 "raid_level": "raid0", 00:18:22.933 "superblock": true, 00:18:22.933 "num_base_bdevs": 3, 00:18:22.933 "num_base_bdevs_discovered": 3, 00:18:22.933 "num_base_bdevs_operational": 3, 00:18:22.933 "base_bdevs_list": [ 00:18:22.933 { 00:18:22.933 "name": "BaseBdev1", 00:18:22.933 "uuid": "80e07bf1-b36f-4a59-921e-f01d91dd5387", 00:18:22.933 "is_configured": true, 00:18:22.933 "data_offset": 2048, 00:18:22.933 "data_size": 63488 00:18:22.933 }, 00:18:22.933 { 00:18:22.933 "name": "BaseBdev2", 00:18:22.933 "uuid": "0b8dfeea-8d45-4363-951e-959538fefdd5", 00:18:22.933 "is_configured": true, 00:18:22.933 "data_offset": 2048, 00:18:22.933 "data_size": 63488 00:18:22.933 }, 00:18:22.933 { 00:18:22.933 "name": "BaseBdev3", 00:18:22.933 "uuid": "ec220a93-3afa-4847-815b-2a1250a21b4a", 00:18:22.933 "is_configured": true, 00:18:22.933 "data_offset": 2048, 00:18:22.933 "data_size": 63488 00:18:22.933 } 00:18:22.933 ] 00:18:22.933 }' 00:18:22.933 13:02:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:22.933 13:02:26 -- common/autotest_common.sh@10 -- # set +x 00:18:23.876 13:02:27 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:23.876 [2024-04-17 13:02:27.948878] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:23.876 [2024-04-17 13:02:27.948927] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:18:23.876 [2024-04-17 13:02:27.948984] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:24.135 13:02:28 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:24.135 13:02:28 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:18:24.135 13:02:28 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:24.135 13:02:28 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:24.135 13:02:28 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:18:24.135 13:02:28 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:18:24.135 13:02:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:24.135 13:02:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:24.135 13:02:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:24.135 13:02:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:24.135 13:02:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:24.135 13:02:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:24.135 13:02:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:24.135 13:02:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:24.135 13:02:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:24.135 13:02:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:24.135 13:02:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.394 13:02:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:24.394 "name": "Existed_Raid", 00:18:24.394 "uuid": "c139d3f7-814f-4946-8101-6ac007abde06", 00:18:24.394 "strip_size_kb": 64, 00:18:24.394 "state": "offline", 00:18:24.394 "raid_level": "raid0", 00:18:24.394 "superblock": true, 00:18:24.394 "num_base_bdevs": 3, 00:18:24.394 "num_base_bdevs_discovered": 2, 00:18:24.394 "num_base_bdevs_operational": 2, 00:18:24.394 "base_bdevs_list": [ 00:18:24.394 { 00:18:24.394 "name": null, 00:18:24.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.394 "is_configured": false, 00:18:24.394 "data_offset": 2048, 00:18:24.394 "data_size": 63488 00:18:24.394 }, 00:18:24.394 { 00:18:24.394 "name": "BaseBdev2", 00:18:24.394 "uuid": "0b8dfeea-8d45-4363-951e-959538fefdd5", 00:18:24.394 "is_configured": true, 00:18:24.394 "data_offset": 2048, 00:18:24.394 "data_size": 63488 00:18:24.394 }, 00:18:24.394 { 00:18:24.394 "name": "BaseBdev3", 00:18:24.394 "uuid": "ec220a93-3afa-4847-815b-2a1250a21b4a", 00:18:24.394 "is_configured": true, 00:18:24.394 "data_offset": 2048, 00:18:24.394 "data_size": 63488 00:18:24.394 } 00:18:24.394 ] 00:18:24.394 }' 00:18:24.394 13:02:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:24.394 13:02:28 -- common/autotest_common.sh@10 -- # set +x 00:18:25.012 13:02:28 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:25.012 13:02:28 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:25.012 13:02:28 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:25.012 13:02:28 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:25.271 13:02:29 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:25.271 13:02:29 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:25.271 13:02:29 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:25.530 [2024-04-17 
13:02:29.527961] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:25.530 13:02:29 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:25.530 13:02:29 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:25.530 13:02:29 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:25.530 13:02:29 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:25.789 13:02:29 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:25.789 13:02:29 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:25.789 13:02:29 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:26.047 [2024-04-17 13:02:30.175355] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:26.047 [2024-04-17 13:02:30.175433] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:18:26.306 13:02:30 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:26.306 13:02:30 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:26.306 13:02:30 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:26.306 13:02:30 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:26.566 13:02:30 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:26.566 13:02:30 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:26.566 13:02:30 -- bdev/bdev_raid.sh@287 -- # killprocess 122640 00:18:26.566 13:02:30 -- common/autotest_common.sh@924 -- # '[' -z 122640 ']' 00:18:26.566 13:02:30 -- common/autotest_common.sh@928 -- # kill -0 122640 00:18:26.566 13:02:30 -- common/autotest_common.sh@929 -- # uname 00:18:26.566 13:02:30 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:18:26.566 13:02:30 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 122640 00:18:26.566 killing process with pid 122640 00:18:26.566 13:02:30 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:18:26.566 13:02:30 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:18:26.566 13:02:30 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 122640' 00:18:26.566 13:02:30 -- common/autotest_common.sh@943 -- # kill 122640 00:18:26.566 13:02:30 -- common/autotest_common.sh@948 -- # wait 122640 00:18:26.566 [2024-04-17 13:02:30.539593] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:26.566 [2024-04-17 13:02:30.539733] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:27.945 ************************************ 00:18:27.945 END TEST raid_state_function_test_sb 00:18:27.945 ************************************ 00:18:27.945 13:02:31 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:27.945 00:18:27.945 real 0m14.303s 00:18:27.945 user 0m25.454s 00:18:27.945 sys 0m1.548s 00:18:27.945 13:02:31 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:18:27.945 13:02:31 -- common/autotest_common.sh@10 -- # set +x 00:18:27.945 13:02:31 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:18:27.945 13:02:31 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:18:27.945 13:02:31 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:18:27.945 13:02:31 -- common/autotest_common.sh@10 -- # set +x 00:18:27.945 ************************************ 00:18:27.945 START TEST raid_superblock_test 00:18:27.945 
************************************ 00:18:27.945 13:02:31 -- common/autotest_common.sh@1099 -- # raid_superblock_test raid0 3 00:18:27.945 13:02:31 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:18:27.945 13:02:31 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:18:27.945 13:02:31 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:18:27.945 13:02:31 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:18:27.945 13:02:31 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:18:27.945 13:02:31 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:18:27.945 13:02:31 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:18:27.945 13:02:31 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:18:27.945 13:02:31 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:18:27.946 13:02:31 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:18:27.946 13:02:31 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:18:27.946 13:02:31 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:18:27.946 13:02:31 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:18:27.946 13:02:31 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:18:27.946 13:02:31 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:18:27.946 13:02:31 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:18:27.946 13:02:31 -- bdev/bdev_raid.sh@357 -- # raid_pid=123068 00:18:27.946 13:02:31 -- bdev/bdev_raid.sh@358 -- # waitforlisten 123068 /var/tmp/spdk-raid.sock 00:18:27.946 13:02:31 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:27.946 13:02:31 -- common/autotest_common.sh@817 -- # '[' -z 123068 ']' 00:18:27.946 13:02:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:27.946 13:02:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:27.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:27.946 13:02:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:27.946 13:02:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:27.946 13:02:31 -- common/autotest_common.sh@10 -- # set +x 00:18:27.946 [2024-04-17 13:02:31.872313] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
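Unlike the state-function tests, raid_superblock_test does not build the array on malloc bdevs directly: it layers a passthru bdev with a fixed UUID on each malloc disk (pt1/pt2/pt3, created just below) and passes -s to bdev_raid_create so an on-disk superblock is written ("superblock": true in the earlier dumps). The superblock costs capacity: data_offset moves to 2048 blocks and per-base data_size drops from 65536 to 63488, which is why the _sb array above reported blockcnt 190464 = 3 x 63488. A sketch of the stacking, reusing the RPC shorthand from the sketch above (same assumptions; the loop is a convenience not present in the original script):

  # One fixed-UUID passthru layer per malloc base, matching the trace below.
  i=1
  for uuid in 00000000-0000-0000-0000-000000000001 \
              00000000-0000-0000-0000-000000000002 \
              00000000-0000-0000-0000-000000000003; do
      $RPC bdev_malloc_create 32 512 -b "malloc$i"
      $RPC bdev_passthru_create -b "malloc$i" -p "pt$i" -u "$uuid"
      i=$((i + 1))
  done

  # -s requests the on-disk superblock; raid_bdev1 then spans pt1..pt3
  # with a 64 KB strip, exactly as the rpc.py call in the trace below.
  $RPC bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s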
00:18:27.946 [2024-04-17 13:02:31.872507] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123068 ] 00:18:27.946 [2024-04-17 13:02:32.039986] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.205 [2024-04-17 13:02:32.268748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.528 [2024-04-17 13:02:32.462646] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:28.787 13:02:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:28.787 13:02:32 -- common/autotest_common.sh@850 -- # return 0 00:18:28.787 13:02:32 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:18:28.787 13:02:32 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:28.787 13:02:32 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:18:28.787 13:02:32 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:18:28.787 13:02:32 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:28.787 13:02:32 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:28.787 13:02:32 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:28.787 13:02:32 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:28.787 13:02:32 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:29.045 malloc1 00:18:29.045 13:02:33 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:29.303 [2024-04-17 13:02:33.321763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:29.303 [2024-04-17 13:02:33.321882] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:29.303 [2024-04-17 13:02:33.321919] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:18:29.303 [2024-04-17 13:02:33.321975] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:29.304 [2024-04-17 13:02:33.324531] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:29.304 [2024-04-17 13:02:33.324586] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:29.304 pt1 00:18:29.304 13:02:33 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:29.304 13:02:33 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:29.304 13:02:33 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:18:29.304 13:02:33 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:18:29.304 13:02:33 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:29.304 13:02:33 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:29.304 13:02:33 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:29.304 13:02:33 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:29.304 13:02:33 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:29.562 malloc2 00:18:29.562 13:02:33 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:18:29.821 [2024-04-17 13:02:33.863914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:29.821 [2024-04-17 13:02:33.864019] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:29.821 [2024-04-17 13:02:33.864070] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:18:29.821 [2024-04-17 13:02:33.864129] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:29.821 [2024-04-17 13:02:33.866658] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:29.821 [2024-04-17 13:02:33.866712] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:29.821 pt2 00:18:29.821 13:02:33 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:29.821 13:02:33 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:29.821 13:02:33 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:18:29.821 13:02:33 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:18:29.821 13:02:33 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:29.821 13:02:33 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:29.821 13:02:33 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:29.821 13:02:33 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:29.821 13:02:33 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:30.080 malloc3 00:18:30.081 13:02:34 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:30.338 [2024-04-17 13:02:34.370737] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:30.338 [2024-04-17 13:02:34.370857] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.338 [2024-04-17 13:02:34.370908] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:18:30.338 [2024-04-17 13:02:34.370962] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.338 [2024-04-17 13:02:34.373580] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.338 [2024-04-17 13:02:34.373652] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:30.338 pt3 00:18:30.338 13:02:34 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:30.338 13:02:34 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:30.338 13:02:34 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:18:30.603 [2024-04-17 13:02:34.602802] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:30.604 [2024-04-17 13:02:34.604899] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:30.604 [2024-04-17 13:02:34.604979] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:30.604 [2024-04-17 13:02:34.605206] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:18:30.604 [2024-04-17 13:02:34.605229] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:30.604 [2024-04-17 13:02:34.605401] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:18:30.604 [2024-04-17 13:02:34.605806] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:18:30.604 [2024-04-17 13:02:34.605827] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:18:30.604 [2024-04-17 13:02:34.605985] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:30.604 13:02:34 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:18:30.604 13:02:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:30.604 13:02:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:30.604 13:02:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:30.604 13:02:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:30.604 13:02:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:30.604 13:02:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:30.604 13:02:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:30.604 13:02:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:30.604 13:02:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:30.604 13:02:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:30.604 13:02:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.867 13:02:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:30.867 "name": "raid_bdev1", 00:18:30.867 "uuid": "63e7ec9f-d03a-480a-ae73-f0d1d9101a3e", 00:18:30.867 "strip_size_kb": 64, 00:18:30.867 "state": "online", 00:18:30.867 "raid_level": "raid0", 00:18:30.867 "superblock": true, 00:18:30.867 "num_base_bdevs": 3, 00:18:30.867 "num_base_bdevs_discovered": 3, 00:18:30.867 "num_base_bdevs_operational": 3, 00:18:30.867 "base_bdevs_list": [ 00:18:30.867 { 00:18:30.867 "name": "pt1", 00:18:30.867 "uuid": "71702deb-c84e-589c-9c72-6070758ed28a", 00:18:30.867 "is_configured": true, 00:18:30.867 "data_offset": 2048, 00:18:30.867 "data_size": 63488 00:18:30.867 }, 00:18:30.867 { 00:18:30.867 "name": "pt2", 00:18:30.867 "uuid": "7ad1add6-36c7-5366-a8c7-b679f8fac3df", 00:18:30.867 "is_configured": true, 00:18:30.867 "data_offset": 2048, 00:18:30.867 "data_size": 63488 00:18:30.867 }, 00:18:30.867 { 00:18:30.867 "name": "pt3", 00:18:30.867 "uuid": "568ec260-dfc3-5458-9a34-f73f33b41f6f", 00:18:30.867 "is_configured": true, 00:18:30.867 "data_offset": 2048, 00:18:30.867 "data_size": 63488 00:18:30.867 } 00:18:30.867 ] 00:18:30.867 }' 00:18:30.867 13:02:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:30.867 13:02:34 -- common/autotest_common.sh@10 -- # set +x 00:18:31.802 13:02:35 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:31.802 13:02:35 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:18:31.802 [2024-04-17 13:02:35.791325] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:31.802 13:02:35 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=63e7ec9f-d03a-480a-ae73-f0d1d9101a3e 00:18:31.802 13:02:35 -- bdev/bdev_raid.sh@380 -- # '[' -z 63e7ec9f-d03a-480a-ae73-f0d1d9101a3e ']' 00:18:31.802 13:02:35 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:32.074 [2024-04-17 13:02:36.011069] 
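Condensed, the array construction and check traced above reduce to a handful of RPCs: three 32 MiB malloc bdevs (65536 blocks of 512 bytes) wrapped in passthru bdevs with fixed UUIDs, then combined into a raid0 with a 64 KiB strip and an on-disk superblock. The -s flag is what yields "superblock": true in the dump and costs 2048 blocks (1 MiB) of metadata per base bdev, leaving data_size 63488 and a raid blockcnt of 3 x 63488 = 190464. A minimal equivalent sketch, with $rpc and $sock standing in for the full paths used above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    for i in 1 2 3; do
        "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "malloc$i"
        "$rpc" -s "$sock" bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done
    "$rpc" -s "$sock" bdev_raid_create -z 64 -r raid0 \
        -b 'pt1 pt2 pt3' -n raid_bdev1 -s
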
bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:32.074 [2024-04-17 13:02:36.011108] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:32.074 [2024-04-17 13:02:36.011222] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:32.074 [2024-04-17 13:02:36.011319] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:32.074 [2024-04-17 13:02:36.011332] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:18:32.074 13:02:36 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:32.074 13:02:36 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:18:32.364 13:02:36 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:18:32.364 13:02:36 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:18:32.364 13:02:36 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:32.364 13:02:36 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:32.364 13:02:36 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:32.364 13:02:36 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:32.622 13:02:36 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:32.622 13:02:36 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:32.881 13:02:36 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:32.881 13:02:36 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:33.140 13:02:37 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:18:33.140 13:02:37 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:33.140 13:02:37 -- common/autotest_common.sh@638 -- # local es=0 00:18:33.140 13:02:37 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:33.140 13:02:37 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:33.140 13:02:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:33.140 13:02:37 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:33.140 13:02:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:33.140 13:02:37 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:33.140 13:02:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:33.140 13:02:37 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:33.140 13:02:37 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:33.140 13:02:37 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:33.399 [2024-04-17 13:02:37.423435] 
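The bdev_raid_create call just issued (routed through the NOT/valid_exec_arg wrappers above) is a negative test: the passthru bdevs were deleted, but malloc1-malloc3 still carry the raid superblock written earlier, so re-creating an array directly on them must fail, as the -17 (File exists) response below confirms. Stripped of the helper machinery, the assertion amounts to this simplified stand-in for NOT(), reusing the $rpc/$sock shorthand:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Succeed only if the create is rejected.
    if "$rpc" -s "$sock" bdev_raid_create -z 64 -r raid0 \
            -b 'malloc1 malloc2 malloc3' -n raid_bdev1; then
        echo 'create on superblock-tagged bdevs unexpectedly succeeded' >&2
        exit 1
    fi
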
bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:33.399 [2024-04-17 13:02:37.425554] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:33.399 [2024-04-17 13:02:37.425619] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:33.399 [2024-04-17 13:02:37.425676] bdev_raid.c:2995:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:18:33.399 [2024-04-17 13:02:37.425755] bdev_raid.c:2995:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:18:33.399 [2024-04-17 13:02:37.425792] bdev_raid.c:2995:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:18:33.399 [2024-04-17 13:02:37.425843] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:33.399 [2024-04-17 13:02:37.425855] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring 00:18:33.399 request: 00:18:33.399 { 00:18:33.399 "name": "raid_bdev1", 00:18:33.399 "raid_level": "raid0", 00:18:33.399 "base_bdevs": [ 00:18:33.399 "malloc1", 00:18:33.399 "malloc2", 00:18:33.399 "malloc3" 00:18:33.399 ], 00:18:33.399 "superblock": false, 00:18:33.399 "strip_size_kb": 64, 00:18:33.399 "method": "bdev_raid_create", 00:18:33.399 "req_id": 1 00:18:33.399 } 00:18:33.399 Got JSON-RPC error response 00:18:33.399 response: 00:18:33.399 { 00:18:33.399 "code": -17, 00:18:33.399 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:33.399 } 00:18:33.399 13:02:37 -- common/autotest_common.sh@641 -- # es=1 00:18:33.399 13:02:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:33.399 13:02:37 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:33.399 13:02:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:33.399 13:02:37 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.399 13:02:37 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:18:33.657 13:02:37 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:18:33.657 13:02:37 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:18:33.657 13:02:37 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:33.916 [2024-04-17 13:02:37.867454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:33.916 [2024-04-17 13:02:37.867548] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:33.916 [2024-04-17 13:02:37.867591] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:33.916 [2024-04-17 13:02:37.867614] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:33.916 [2024-04-17 13:02:37.870027] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:33.916 [2024-04-17 13:02:37.870079] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:33.916 [2024-04-17 13:02:37.870209] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:33.916 [2024-04-17 13:02:37.870274] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:33.916 pt1 00:18:33.916 13:02:37 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 
configuring raid0 64 3 00:18:33.916 13:02:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:33.916 13:02:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:33.916 13:02:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:33.916 13:02:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:33.916 13:02:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:33.916 13:02:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:33.916 13:02:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:33.916 13:02:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:33.916 13:02:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:33.916 13:02:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.916 13:02:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.175 13:02:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:34.175 "name": "raid_bdev1", 00:18:34.175 "uuid": "63e7ec9f-d03a-480a-ae73-f0d1d9101a3e", 00:18:34.175 "strip_size_kb": 64, 00:18:34.175 "state": "configuring", 00:18:34.175 "raid_level": "raid0", 00:18:34.175 "superblock": true, 00:18:34.175 "num_base_bdevs": 3, 00:18:34.175 "num_base_bdevs_discovered": 1, 00:18:34.175 "num_base_bdevs_operational": 3, 00:18:34.175 "base_bdevs_list": [ 00:18:34.175 { 00:18:34.175 "name": "pt1", 00:18:34.175 "uuid": "71702deb-c84e-589c-9c72-6070758ed28a", 00:18:34.175 "is_configured": true, 00:18:34.175 "data_offset": 2048, 00:18:34.175 "data_size": 63488 00:18:34.175 }, 00:18:34.175 { 00:18:34.175 "name": null, 00:18:34.175 "uuid": "7ad1add6-36c7-5366-a8c7-b679f8fac3df", 00:18:34.175 "is_configured": false, 00:18:34.175 "data_offset": 2048, 00:18:34.175 "data_size": 63488 00:18:34.175 }, 00:18:34.175 { 00:18:34.175 "name": null, 00:18:34.175 "uuid": "568ec260-dfc3-5458-9a34-f73f33b41f6f", 00:18:34.175 "is_configured": false, 00:18:34.175 "data_offset": 2048, 00:18:34.175 "data_size": 63488 00:18:34.175 } 00:18:34.175 ] 00:18:34.175 }' 00:18:34.175 13:02:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:34.175 13:02:38 -- common/autotest_common.sh@10 -- # set +x 00:18:34.743 13:02:38 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:18:34.743 13:02:38 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:35.002 [2024-04-17 13:02:39.115750] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:35.002 [2024-04-17 13:02:39.115883] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.002 [2024-04-17 13:02:39.115934] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:35.002 [2024-04-17 13:02:39.115964] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.002 [2024-04-17 13:02:39.116470] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:35.002 [2024-04-17 13:02:39.116520] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:35.002 [2024-04-17 13:02:39.116646] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:35.002 [2024-04-17 13:02:39.116685] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:35.002 pt2 00:18:35.002 13:02:39 -- 
bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:35.261 [2024-04-17 13:02:39.371856] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:35.261 13:02:39 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:18:35.261 13:02:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:35.261 13:02:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:35.261 13:02:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:35.261 13:02:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:35.262 13:02:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:35.262 13:02:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:35.262 13:02:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:35.262 13:02:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:35.262 13:02:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:35.262 13:02:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:35.262 13:02:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.520 13:02:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:35.520 "name": "raid_bdev1", 00:18:35.520 "uuid": "63e7ec9f-d03a-480a-ae73-f0d1d9101a3e", 00:18:35.520 "strip_size_kb": 64, 00:18:35.520 "state": "configuring", 00:18:35.520 "raid_level": "raid0", 00:18:35.520 "superblock": true, 00:18:35.520 "num_base_bdevs": 3, 00:18:35.520 "num_base_bdevs_discovered": 1, 00:18:35.520 "num_base_bdevs_operational": 3, 00:18:35.520 "base_bdevs_list": [ 00:18:35.520 { 00:18:35.520 "name": "pt1", 00:18:35.520 "uuid": "71702deb-c84e-589c-9c72-6070758ed28a", 00:18:35.520 "is_configured": true, 00:18:35.520 "data_offset": 2048, 00:18:35.520 "data_size": 63488 00:18:35.520 }, 00:18:35.520 { 00:18:35.520 "name": null, 00:18:35.520 "uuid": "7ad1add6-36c7-5366-a8c7-b679f8fac3df", 00:18:35.520 "is_configured": false, 00:18:35.520 "data_offset": 2048, 00:18:35.520 "data_size": 63488 00:18:35.520 }, 00:18:35.520 { 00:18:35.520 "name": null, 00:18:35.520 "uuid": "568ec260-dfc3-5458-9a34-f73f33b41f6f", 00:18:35.520 "is_configured": false, 00:18:35.520 "data_offset": 2048, 00:18:35.520 "data_size": 63488 00:18:35.520 } 00:18:35.520 ] 00:18:35.520 }' 00:18:35.520 13:02:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:35.520 13:02:39 -- common/autotest_common.sh@10 -- # set +x 00:18:36.456 13:02:40 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:18:36.456 13:02:40 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:36.456 13:02:40 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:36.456 [2024-04-17 13:02:40.572058] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:36.456 [2024-04-17 13:02:40.572162] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.456 [2024-04-17 13:02:40.572207] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:36.456 [2024-04-17 13:02:40.572244] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.456 [2024-04-17 13:02:40.572746] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.456 [2024-04-17 13:02:40.572784] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:36.456 [2024-04-17 13:02:40.572905] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:36.456 [2024-04-17 13:02:40.572934] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:36.456 pt2 00:18:36.456 13:02:40 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:36.456 13:02:40 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:36.456 13:02:40 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:36.714 [2024-04-17 13:02:40.788128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:36.714 [2024-04-17 13:02:40.788212] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.714 [2024-04-17 13:02:40.788252] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:18:36.714 [2024-04-17 13:02:40.788281] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.714 [2024-04-17 13:02:40.788770] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.714 [2024-04-17 13:02:40.788819] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:36.714 [2024-04-17 13:02:40.788951] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:36.714 [2024-04-17 13:02:40.788981] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:36.715 [2024-04-17 13:02:40.789113] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:18:36.715 [2024-04-17 13:02:40.789126] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:36.715 [2024-04-17 13:02:40.789251] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:18:36.715 [2024-04-17 13:02:40.789584] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:18:36.715 [2024-04-17 13:02:40.789599] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:18:36.715 [2024-04-17 13:02:40.789742] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:36.715 pt3 00:18:36.715 13:02:40 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:36.715 13:02:40 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:36.715 13:02:40 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:18:36.715 13:02:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:36.715 13:02:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:36.715 13:02:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:36.715 13:02:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:36.715 13:02:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:36.715 13:02:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:36.715 13:02:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:36.715 13:02:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:36.715 13:02:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:36.715 13:02:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:36.715 
13:02:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.973 13:02:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:36.973 "name": "raid_bdev1", 00:18:36.973 "uuid": "63e7ec9f-d03a-480a-ae73-f0d1d9101a3e", 00:18:36.973 "strip_size_kb": 64, 00:18:36.973 "state": "online", 00:18:36.973 "raid_level": "raid0", 00:18:36.973 "superblock": true, 00:18:36.973 "num_base_bdevs": 3, 00:18:36.973 "num_base_bdevs_discovered": 3, 00:18:36.973 "num_base_bdevs_operational": 3, 00:18:36.973 "base_bdevs_list": [ 00:18:36.973 { 00:18:36.973 "name": "pt1", 00:18:36.973 "uuid": "71702deb-c84e-589c-9c72-6070758ed28a", 00:18:36.973 "is_configured": true, 00:18:36.973 "data_offset": 2048, 00:18:36.973 "data_size": 63488 00:18:36.973 }, 00:18:36.973 { 00:18:36.973 "name": "pt2", 00:18:36.973 "uuid": "7ad1add6-36c7-5366-a8c7-b679f8fac3df", 00:18:36.973 "is_configured": true, 00:18:36.973 "data_offset": 2048, 00:18:36.973 "data_size": 63488 00:18:36.973 }, 00:18:36.973 { 00:18:36.973 "name": "pt3", 00:18:36.973 "uuid": "568ec260-dfc3-5458-9a34-f73f33b41f6f", 00:18:36.973 "is_configured": true, 00:18:36.973 "data_offset": 2048, 00:18:36.973 "data_size": 63488 00:18:36.973 } 00:18:36.973 ] 00:18:36.973 }' 00:18:36.973 13:02:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:36.973 13:02:41 -- common/autotest_common.sh@10 -- # set +x 00:18:37.909 13:02:41 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:37.909 13:02:41 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:18:37.909 [2024-04-17 13:02:41.972757] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:37.909 13:02:41 -- bdev/bdev_raid.sh@430 -- # '[' 63e7ec9f-d03a-480a-ae73-f0d1d9101a3e '!=' 63e7ec9f-d03a-480a-ae73-f0d1d9101a3e ']' 00:18:37.909 13:02:41 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:18:37.909 13:02:41 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:37.909 13:02:41 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:37.909 13:02:41 -- bdev/bdev_raid.sh@511 -- # killprocess 123068 00:18:37.909 13:02:41 -- common/autotest_common.sh@924 -- # '[' -z 123068 ']' 00:18:37.909 13:02:41 -- common/autotest_common.sh@928 -- # kill -0 123068 00:18:37.909 13:02:41 -- common/autotest_common.sh@929 -- # uname 00:18:37.910 13:02:41 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:18:37.910 13:02:41 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 123068 00:18:37.910 killing process with pid 123068 00:18:37.910 13:02:42 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:18:37.910 13:02:42 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:18:37.910 13:02:42 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 123068' 00:18:37.910 13:02:42 -- common/autotest_common.sh@943 -- # kill 123068 00:18:37.910 13:02:42 -- common/autotest_common.sh@948 -- # wait 123068 00:18:37.910 [2024-04-17 13:02:42.004975] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:37.910 [2024-04-17 13:02:42.005061] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:37.910 [2024-04-17 13:02:42.005131] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:37.910 [2024-04-17 13:02:42.005153] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:18:38.168 [2024-04-17 13:02:42.255255] 
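Teardown here follows the killprocess helper visible in the trace: confirm the pid is still alive, check its command name (an SPDK app shows up as reactor_0), then signal and reap it. Roughly, as a simplified sketch of what autotest_common.sh does:

    killprocess() {
        local pid=$1
        kill -0 "$pid"                        # error out if already gone
        if [ "$(uname)" = Linux ]; then
            ps --no-headers -o comm= "$pid"   # reactor_0 for an SPDK app
        fi
        echo "killing process with pid $pid"
        kill "$pid"        # the real helper switches to sudo kill when the
                           # command name is sudo; omitted in this sketch
        wait "$pid"        # reap the process and collect its exit status
    }
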
bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:39.545 ************************************ 00:18:39.545 END TEST raid_superblock_test 00:18:39.545 ************************************ 00:18:39.545 13:02:43 -- bdev/bdev_raid.sh@513 -- # return 0 00:18:39.545 00:18:39.545 real 0m11.554s 00:18:39.545 user 0m20.260s 00:18:39.545 sys 0m1.307s 00:18:39.545 13:02:43 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:18:39.545 13:02:43 -- common/autotest_common.sh@10 -- # set +x 00:18:39.545 13:02:43 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:18:39.545 13:02:43 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:18:39.545 13:02:43 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:18:39.545 13:02:43 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:18:39.545 13:02:43 -- common/autotest_common.sh@10 -- # set +x 00:18:39.545 ************************************ 00:18:39.545 START TEST raid_state_function_test 00:18:39.545 ************************************ 00:18:39.545 13:02:43 -- common/autotest_common.sh@1099 -- # raid_state_function_test concat 3 false 00:18:39.545 13:02:43 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:18:39.545 13:02:43 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:18:39.545 13:02:43 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:18:39.545 13:02:43 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:39.545 13:02:43 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:18:39.545 13:02:43 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:39.545 13:02:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:39.545 13:02:43 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:39.545 13:02:43 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:39.545 13:02:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:39.545 13:02:43 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:39.545 13:02:43 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:39.545 13:02:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:39.545 13:02:43 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:39.545 13:02:43 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:39.545 13:02:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:39.545 13:02:43 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:39.545 13:02:43 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:39.545 13:02:43 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:39.545 13:02:43 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:39.545 13:02:43 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:39.545 13:02:43 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:18:39.545 13:02:43 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:18:39.545 13:02:43 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:18:39.545 13:02:43 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:18:39.545 13:02:43 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:18:39.545 13:02:43 -- bdev/bdev_raid.sh@226 -- # raid_pid=123409 00:18:39.545 13:02:43 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:39.545 Process raid pid: 123409 00:18:39.546 13:02:43 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 123409' 00:18:39.546 13:02:43 -- bdev/bdev_raid.sh@228 -- # waitforlisten 123409 
/var/tmp/spdk-raid.sock 00:18:39.546 13:02:43 -- common/autotest_common.sh@817 -- # '[' -z 123409 ']' 00:18:39.546 13:02:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:39.546 13:02:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:39.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:39.546 13:02:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:39.546 13:02:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:39.546 13:02:43 -- common/autotest_common.sh@10 -- # set +x 00:18:39.546 [2024-04-17 13:02:43.498990] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:18:39.546 [2024-04-17 13:02:43.499160] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:39.546 [2024-04-17 13:02:43.661380] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.804 [2024-04-17 13:02:43.898271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:40.063 [2024-04-17 13:02:44.096979] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:40.629 13:02:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:40.629 13:02:44 -- common/autotest_common.sh@850 -- # return 0 00:18:40.629 13:02:44 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:40.888 [2024-04-17 13:02:44.781684] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:40.888 [2024-04-17 13:02:44.781784] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:40.888 [2024-04-17 13:02:44.781799] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:40.888 [2024-04-17 13:02:44.781819] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:40.888 [2024-04-17 13:02:44.781827] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:40.888 [2024-04-17 13:02:44.781870] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:40.888 13:02:44 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:40.888 13:02:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:40.888 13:02:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:40.888 13:02:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:40.888 13:02:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:40.888 13:02:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:40.888 13:02:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:40.888 13:02:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:40.888 13:02:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:40.888 13:02:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:40.888 13:02:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.888 13:02:44 -- bdev/bdev_raid.sh@127 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:18:41.147 13:02:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:41.147 "name": "Existed_Raid", 00:18:41.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.147 "strip_size_kb": 64, 00:18:41.147 "state": "configuring", 00:18:41.147 "raid_level": "concat", 00:18:41.147 "superblock": false, 00:18:41.147 "num_base_bdevs": 3, 00:18:41.147 "num_base_bdevs_discovered": 0, 00:18:41.147 "num_base_bdevs_operational": 3, 00:18:41.147 "base_bdevs_list": [ 00:18:41.147 { 00:18:41.147 "name": "BaseBdev1", 00:18:41.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.147 "is_configured": false, 00:18:41.147 "data_offset": 0, 00:18:41.147 "data_size": 0 00:18:41.147 }, 00:18:41.147 { 00:18:41.147 "name": "BaseBdev2", 00:18:41.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.147 "is_configured": false, 00:18:41.147 "data_offset": 0, 00:18:41.147 "data_size": 0 00:18:41.147 }, 00:18:41.147 { 00:18:41.147 "name": "BaseBdev3", 00:18:41.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.147 "is_configured": false, 00:18:41.147 "data_offset": 0, 00:18:41.147 "data_size": 0 00:18:41.147 } 00:18:41.147 ] 00:18:41.147 }' 00:18:41.147 13:02:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:41.147 13:02:45 -- common/autotest_common.sh@10 -- # set +x 00:18:41.714 13:02:45 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:41.972 [2024-04-17 13:02:46.045822] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:41.972 [2024-04-17 13:02:46.045878] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:18:41.972 13:02:46 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:42.231 [2024-04-17 13:02:46.321912] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:42.231 [2024-04-17 13:02:46.322009] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:42.231 [2024-04-17 13:02:46.322030] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:42.231 [2024-04-17 13:02:46.322071] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:42.231 [2024-04-17 13:02:46.322086] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:42.231 [2024-04-17 13:02:46.322137] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:42.231 13:02:46 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:42.489 [2024-04-17 13:02:46.589580] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:42.489 BaseBdev1 00:18:42.489 13:02:46 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:42.489 13:02:46 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:18:42.489 13:02:46 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:42.489 13:02:46 -- common/autotest_common.sh@887 -- # local i 00:18:42.489 13:02:46 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:42.489 13:02:46 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:42.489 13:02:46 -- 
common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:42.748 13:02:46 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:43.007 [ 00:18:43.007 { 00:18:43.007 "name": "BaseBdev1", 00:18:43.007 "aliases": [ 00:18:43.007 "cc303d1b-aaef-42ab-ac75-02eceafd5cac" 00:18:43.007 ], 00:18:43.007 "product_name": "Malloc disk", 00:18:43.007 "block_size": 512, 00:18:43.007 "num_blocks": 65536, 00:18:43.007 "uuid": "cc303d1b-aaef-42ab-ac75-02eceafd5cac", 00:18:43.007 "assigned_rate_limits": { 00:18:43.007 "rw_ios_per_sec": 0, 00:18:43.007 "rw_mbytes_per_sec": 0, 00:18:43.007 "r_mbytes_per_sec": 0, 00:18:43.007 "w_mbytes_per_sec": 0 00:18:43.007 }, 00:18:43.007 "claimed": true, 00:18:43.007 "claim_type": "exclusive_write", 00:18:43.007 "zoned": false, 00:18:43.007 "supported_io_types": { 00:18:43.007 "read": true, 00:18:43.007 "write": true, 00:18:43.007 "unmap": true, 00:18:43.007 "write_zeroes": true, 00:18:43.007 "flush": true, 00:18:43.007 "reset": true, 00:18:43.007 "compare": false, 00:18:43.007 "compare_and_write": false, 00:18:43.007 "abort": true, 00:18:43.007 "nvme_admin": false, 00:18:43.007 "nvme_io": false 00:18:43.007 }, 00:18:43.007 "memory_domains": [ 00:18:43.007 { 00:18:43.007 "dma_device_id": "system", 00:18:43.007 "dma_device_type": 1 00:18:43.007 }, 00:18:43.007 { 00:18:43.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:43.007 "dma_device_type": 2 00:18:43.007 } 00:18:43.007 ], 00:18:43.007 "driver_specific": {} 00:18:43.007 } 00:18:43.007 ] 00:18:43.007 13:02:47 -- common/autotest_common.sh@893 -- # return 0 00:18:43.007 13:02:47 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:43.007 13:02:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:43.007 13:02:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:43.007 13:02:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:43.007 13:02:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:43.007 13:02:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:43.007 13:02:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:43.007 13:02:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:43.007 13:02:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:43.007 13:02:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:43.007 13:02:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.007 13:02:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:43.266 13:02:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:43.266 "name": "Existed_Raid", 00:18:43.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.266 "strip_size_kb": 64, 00:18:43.266 "state": "configuring", 00:18:43.266 "raid_level": "concat", 00:18:43.266 "superblock": false, 00:18:43.266 "num_base_bdevs": 3, 00:18:43.266 "num_base_bdevs_discovered": 1, 00:18:43.266 "num_base_bdevs_operational": 3, 00:18:43.266 "base_bdevs_list": [ 00:18:43.266 { 00:18:43.266 "name": "BaseBdev1", 00:18:43.266 "uuid": "cc303d1b-aaef-42ab-ac75-02eceafd5cac", 00:18:43.266 "is_configured": true, 00:18:43.266 "data_offset": 0, 00:18:43.266 "data_size": 65536 00:18:43.266 }, 00:18:43.266 { 00:18:43.266 "name": "BaseBdev2", 00:18:43.266 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:43.266 "is_configured": false, 00:18:43.266 "data_offset": 0, 00:18:43.266 "data_size": 0 00:18:43.266 }, 00:18:43.266 { 00:18:43.266 "name": "BaseBdev3", 00:18:43.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.266 "is_configured": false, 00:18:43.266 "data_offset": 0, 00:18:43.266 "data_size": 0 00:18:43.266 } 00:18:43.266 ] 00:18:43.266 }' 00:18:43.266 13:02:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:43.266 13:02:47 -- common/autotest_common.sh@10 -- # set +x 00:18:44.202 13:02:48 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:44.202 [2024-04-17 13:02:48.242032] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:44.202 [2024-04-17 13:02:48.242117] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:18:44.202 13:02:48 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:18:44.202 13:02:48 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:44.459 [2024-04-17 13:02:48.538175] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:44.459 [2024-04-17 13:02:48.540433] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:44.459 [2024-04-17 13:02:48.540505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:44.459 [2024-04-17 13:02:48.540525] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:44.459 [2024-04-17 13:02:48.540574] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:44.459 13:02:48 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:44.459 13:02:48 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:44.459 13:02:48 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:44.459 13:02:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:44.460 13:02:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:44.460 13:02:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:44.460 13:02:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:44.460 13:02:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:44.460 13:02:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:44.460 13:02:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:44.460 13:02:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:44.460 13:02:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:44.460 13:02:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:44.460 13:02:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:44.717 13:02:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:44.717 "name": "Existed_Raid", 00:18:44.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.717 "strip_size_kb": 64, 00:18:44.717 "state": "configuring", 00:18:44.717 "raid_level": "concat", 00:18:44.717 "superblock": false, 00:18:44.717 "num_base_bdevs": 3, 00:18:44.717 "num_base_bdevs_discovered": 1, 00:18:44.717 "num_base_bdevs_operational": 3, 00:18:44.717 
"base_bdevs_list": [ 00:18:44.717 { 00:18:44.717 "name": "BaseBdev1", 00:18:44.717 "uuid": "cc303d1b-aaef-42ab-ac75-02eceafd5cac", 00:18:44.717 "is_configured": true, 00:18:44.717 "data_offset": 0, 00:18:44.717 "data_size": 65536 00:18:44.717 }, 00:18:44.717 { 00:18:44.717 "name": "BaseBdev2", 00:18:44.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.717 "is_configured": false, 00:18:44.717 "data_offset": 0, 00:18:44.717 "data_size": 0 00:18:44.717 }, 00:18:44.717 { 00:18:44.717 "name": "BaseBdev3", 00:18:44.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.717 "is_configured": false, 00:18:44.717 "data_offset": 0, 00:18:44.717 "data_size": 0 00:18:44.717 } 00:18:44.717 ] 00:18:44.717 }' 00:18:44.717 13:02:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:44.717 13:02:48 -- common/autotest_common.sh@10 -- # set +x 00:18:45.649 13:02:49 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:45.649 [2024-04-17 13:02:49.740882] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:45.649 BaseBdev2 00:18:45.649 13:02:49 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:45.649 13:02:49 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:18:45.649 13:02:49 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:45.649 13:02:49 -- common/autotest_common.sh@887 -- # local i 00:18:45.649 13:02:49 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:45.649 13:02:49 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:45.649 13:02:49 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:45.907 13:02:50 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:46.166 [ 00:18:46.166 { 00:18:46.166 "name": "BaseBdev2", 00:18:46.166 "aliases": [ 00:18:46.166 "5f02e714-1691-4dfd-8ae3-d365bcff5272" 00:18:46.166 ], 00:18:46.166 "product_name": "Malloc disk", 00:18:46.166 "block_size": 512, 00:18:46.166 "num_blocks": 65536, 00:18:46.166 "uuid": "5f02e714-1691-4dfd-8ae3-d365bcff5272", 00:18:46.166 "assigned_rate_limits": { 00:18:46.166 "rw_ios_per_sec": 0, 00:18:46.166 "rw_mbytes_per_sec": 0, 00:18:46.166 "r_mbytes_per_sec": 0, 00:18:46.166 "w_mbytes_per_sec": 0 00:18:46.166 }, 00:18:46.166 "claimed": true, 00:18:46.166 "claim_type": "exclusive_write", 00:18:46.166 "zoned": false, 00:18:46.166 "supported_io_types": { 00:18:46.166 "read": true, 00:18:46.166 "write": true, 00:18:46.166 "unmap": true, 00:18:46.166 "write_zeroes": true, 00:18:46.166 "flush": true, 00:18:46.166 "reset": true, 00:18:46.166 "compare": false, 00:18:46.166 "compare_and_write": false, 00:18:46.166 "abort": true, 00:18:46.166 "nvme_admin": false, 00:18:46.166 "nvme_io": false 00:18:46.166 }, 00:18:46.166 "memory_domains": [ 00:18:46.166 { 00:18:46.166 "dma_device_id": "system", 00:18:46.166 "dma_device_type": 1 00:18:46.166 }, 00:18:46.166 { 00:18:46.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:46.166 "dma_device_type": 2 00:18:46.166 } 00:18:46.166 ], 00:18:46.166 "driver_specific": {} 00:18:46.166 } 00:18:46.166 ] 00:18:46.166 13:02:50 -- common/autotest_common.sh@893 -- # return 0 00:18:46.166 13:02:50 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:46.166 13:02:50 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:46.166 13:02:50 -- 
bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:46.166 13:02:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:46.166 13:02:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:46.166 13:02:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:46.166 13:02:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:46.166 13:02:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:46.166 13:02:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:46.166 13:02:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:46.166 13:02:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:46.166 13:02:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:46.166 13:02:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:46.166 13:02:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.424 13:02:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:46.424 "name": "Existed_Raid", 00:18:46.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.424 "strip_size_kb": 64, 00:18:46.424 "state": "configuring", 00:18:46.424 "raid_level": "concat", 00:18:46.424 "superblock": false, 00:18:46.424 "num_base_bdevs": 3, 00:18:46.424 "num_base_bdevs_discovered": 2, 00:18:46.424 "num_base_bdevs_operational": 3, 00:18:46.424 "base_bdevs_list": [ 00:18:46.424 { 00:18:46.424 "name": "BaseBdev1", 00:18:46.424 "uuid": "cc303d1b-aaef-42ab-ac75-02eceafd5cac", 00:18:46.424 "is_configured": true, 00:18:46.424 "data_offset": 0, 00:18:46.424 "data_size": 65536 00:18:46.424 }, 00:18:46.424 { 00:18:46.424 "name": "BaseBdev2", 00:18:46.424 "uuid": "5f02e714-1691-4dfd-8ae3-d365bcff5272", 00:18:46.424 "is_configured": true, 00:18:46.424 "data_offset": 0, 00:18:46.424 "data_size": 65536 00:18:46.424 }, 00:18:46.424 { 00:18:46.424 "name": "BaseBdev3", 00:18:46.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.425 "is_configured": false, 00:18:46.425 "data_offset": 0, 00:18:46.425 "data_size": 0 00:18:46.425 } 00:18:46.425 ] 00:18:46.425 }' 00:18:46.425 13:02:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:46.425 13:02:50 -- common/autotest_common.sh@10 -- # set +x 00:18:47.396 13:02:51 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:47.396 [2024-04-17 13:02:51.478642] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:47.396 [2024-04-17 13:02:51.478699] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:18:47.396 [2024-04-17 13:02:51.478710] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:47.396 [2024-04-17 13:02:51.478845] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:18:47.396 [2024-04-17 13:02:51.479254] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:18:47.396 [2024-04-17 13:02:51.479279] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:18:47.396 [2024-04-17 13:02:51.479557] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:47.396 BaseBdev3 00:18:47.396 13:02:51 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:47.396 13:02:51 -- common/autotest_common.sh@885 -- 
# local bdev_name=BaseBdev3 00:18:47.396 13:02:51 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:47.396 13:02:51 -- common/autotest_common.sh@887 -- # local i 00:18:47.396 13:02:51 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:47.396 13:02:51 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:47.396 13:02:51 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:47.654 13:02:51 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:47.912 [ 00:18:47.912 { 00:18:47.912 "name": "BaseBdev3", 00:18:47.912 "aliases": [ 00:18:47.912 "e47d81da-b863-4ac3-b194-f3acef3d7b7f" 00:18:47.912 ], 00:18:47.912 "product_name": "Malloc disk", 00:18:47.912 "block_size": 512, 00:18:47.912 "num_blocks": 65536, 00:18:47.912 "uuid": "e47d81da-b863-4ac3-b194-f3acef3d7b7f", 00:18:47.912 "assigned_rate_limits": { 00:18:47.912 "rw_ios_per_sec": 0, 00:18:47.912 "rw_mbytes_per_sec": 0, 00:18:47.912 "r_mbytes_per_sec": 0, 00:18:47.912 "w_mbytes_per_sec": 0 00:18:47.912 }, 00:18:47.912 "claimed": true, 00:18:47.912 "claim_type": "exclusive_write", 00:18:47.912 "zoned": false, 00:18:47.912 "supported_io_types": { 00:18:47.912 "read": true, 00:18:47.912 "write": true, 00:18:47.912 "unmap": true, 00:18:47.912 "write_zeroes": true, 00:18:47.912 "flush": true, 00:18:47.912 "reset": true, 00:18:47.912 "compare": false, 00:18:47.912 "compare_and_write": false, 00:18:47.912 "abort": true, 00:18:47.912 "nvme_admin": false, 00:18:47.912 "nvme_io": false 00:18:47.912 }, 00:18:47.912 "memory_domains": [ 00:18:47.912 { 00:18:47.912 "dma_device_id": "system", 00:18:47.912 "dma_device_type": 1 00:18:47.912 }, 00:18:47.912 { 00:18:47.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:47.912 "dma_device_type": 2 00:18:47.912 } 00:18:47.912 ], 00:18:47.912 "driver_specific": {} 00:18:47.912 } 00:18:47.912 ] 00:18:47.912 13:02:52 -- common/autotest_common.sh@893 -- # return 0 00:18:47.912 13:02:52 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:47.912 13:02:52 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:47.912 13:02:52 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:18:47.912 13:02:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:47.912 13:02:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:47.912 13:02:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:47.912 13:02:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:47.912 13:02:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:47.912 13:02:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:47.912 13:02:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:47.912 13:02:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:47.912 13:02:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:47.912 13:02:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:47.912 13:02:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:48.171 13:02:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:48.171 "name": "Existed_Raid", 00:18:48.171 "uuid": "9f25129a-69a4-4aa7-bfe8-ffe53e8a3ef0", 00:18:48.171 "strip_size_kb": 64, 00:18:48.171 "state": "online", 00:18:48.171 "raid_level": "concat", 00:18:48.171 
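verify_raid_bdev_state, invoked after every mutation in this test, selects the named array out of bdev_raid_get_bdevs all with the jq filter shown above and compares state, raid_level, strip_size_kb and the base-bdev counts against the expected values. The core of that check, sketched with the same $rpc/$sock shorthand:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
           jq -r '.[] | select(.name == "Existed_Raid")')
    state=$(jq -r '.state' <<<"$info")
    if [ "$state" != online ]; then
        echo "Existed_Raid state is $state, expected online" >&2
        exit 1
    fi

Because concat, like raid0, carries no redundancy (has_redundancy returns 1 for both), the step that follows deletes BaseBdev1 and expects the array to drop straight to offline with only two base bdevs left operational.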
"superblock": false, 00:18:48.171 "num_base_bdevs": 3, 00:18:48.171 "num_base_bdevs_discovered": 3, 00:18:48.171 "num_base_bdevs_operational": 3, 00:18:48.171 "base_bdevs_list": [ 00:18:48.171 { 00:18:48.171 "name": "BaseBdev1", 00:18:48.171 "uuid": "cc303d1b-aaef-42ab-ac75-02eceafd5cac", 00:18:48.171 "is_configured": true, 00:18:48.171 "data_offset": 0, 00:18:48.171 "data_size": 65536 00:18:48.171 }, 00:18:48.171 { 00:18:48.171 "name": "BaseBdev2", 00:18:48.171 "uuid": "5f02e714-1691-4dfd-8ae3-d365bcff5272", 00:18:48.171 "is_configured": true, 00:18:48.171 "data_offset": 0, 00:18:48.171 "data_size": 65536 00:18:48.171 }, 00:18:48.171 { 00:18:48.171 "name": "BaseBdev3", 00:18:48.171 "uuid": "e47d81da-b863-4ac3-b194-f3acef3d7b7f", 00:18:48.171 "is_configured": true, 00:18:48.171 "data_offset": 0, 00:18:48.171 "data_size": 65536 00:18:48.171 } 00:18:48.171 ] 00:18:48.171 }' 00:18:48.429 13:02:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:48.429 13:02:52 -- common/autotest_common.sh@10 -- # set +x 00:18:48.995 13:02:52 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:49.253 [2024-04-17 13:02:53.223214] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:49.253 [2024-04-17 13:02:53.223276] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:49.253 [2024-04-17 13:02:53.223391] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:49.253 13:02:53 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:49.253 13:02:53 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:18:49.253 13:02:53 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:49.253 13:02:53 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:49.253 13:02:53 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:18:49.253 13:02:53 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:18:49.253 13:02:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:49.253 13:02:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:49.253 13:02:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:49.253 13:02:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:49.253 13:02:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:49.253 13:02:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:49.253 13:02:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:49.253 13:02:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:49.253 13:02:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:49.253 13:02:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.253 13:02:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:49.511 13:02:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:49.511 "name": "Existed_Raid", 00:18:49.511 "uuid": "9f25129a-69a4-4aa7-bfe8-ffe53e8a3ef0", 00:18:49.511 "strip_size_kb": 64, 00:18:49.511 "state": "offline", 00:18:49.511 "raid_level": "concat", 00:18:49.511 "superblock": false, 00:18:49.511 "num_base_bdevs": 3, 00:18:49.511 "num_base_bdevs_discovered": 2, 00:18:49.511 "num_base_bdevs_operational": 2, 00:18:49.511 "base_bdevs_list": [ 00:18:49.511 { 00:18:49.512 "name": null, 00:18:49.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.512 "is_configured": false, 00:18:49.512 "data_offset": 0, 
00:18:49.512 "data_size": 65536 00:18:49.512 }, 00:18:49.512 { 00:18:49.512 "name": "BaseBdev2", 00:18:49.512 "uuid": "5f02e714-1691-4dfd-8ae3-d365bcff5272", 00:18:49.512 "is_configured": true, 00:18:49.512 "data_offset": 0, 00:18:49.512 "data_size": 65536 00:18:49.512 }, 00:18:49.512 { 00:18:49.512 "name": "BaseBdev3", 00:18:49.512 "uuid": "e47d81da-b863-4ac3-b194-f3acef3d7b7f", 00:18:49.512 "is_configured": true, 00:18:49.512 "data_offset": 0, 00:18:49.512 "data_size": 65536 00:18:49.512 } 00:18:49.512 ] 00:18:49.512 }' 00:18:49.512 13:02:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:49.512 13:02:53 -- common/autotest_common.sh@10 -- # set +x 00:18:50.446 13:02:54 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:50.446 13:02:54 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:50.446 13:02:54 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:50.446 13:02:54 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:50.446 13:02:54 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:50.446 13:02:54 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:50.446 13:02:54 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:50.751 [2024-04-17 13:02:54.730390] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:50.751 13:02:54 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:50.751 13:02:54 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:50.751 13:02:54 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:50.751 13:02:54 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:51.017 13:02:55 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:51.017 13:02:55 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:51.017 13:02:55 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:51.276 [2024-04-17 13:02:55.278382] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:51.276 [2024-04-17 13:02:55.278660] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:18:51.276 13:02:55 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:51.276 13:02:55 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:51.276 13:02:55 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:51.276 13:02:55 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:51.534 13:02:55 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:51.534 13:02:55 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:51.534 13:02:55 -- bdev/bdev_raid.sh@287 -- # killprocess 123409 00:18:51.534 13:02:55 -- common/autotest_common.sh@924 -- # '[' -z 123409 ']' 00:18:51.534 13:02:55 -- common/autotest_common.sh@928 -- # kill -0 123409 00:18:51.534 13:02:55 -- common/autotest_common.sh@929 -- # uname 00:18:51.534 13:02:55 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:18:51.534 13:02:55 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 123409 00:18:51.534 killing process with pid 123409 00:18:51.534 13:02:55 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:18:51.534 13:02:55 -- common/autotest_common.sh@934 -- # '[' reactor_0 = 
sudo ']' 00:18:51.534 13:02:55 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 123409' 00:18:51.534 13:02:55 -- common/autotest_common.sh@943 -- # kill 123409 00:18:51.534 13:02:55 -- common/autotest_common.sh@948 -- # wait 123409 00:18:51.534 [2024-04-17 13:02:55.619800] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:51.534 [2024-04-17 13:02:55.619941] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:52.910 ************************************ 00:18:52.910 END TEST raid_state_function_test 00:18:52.910 ************************************ 00:18:52.910 13:02:56 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:52.910 00:18:52.910 real 0m13.335s 00:18:52.910 user 0m23.777s 00:18:52.910 sys 0m1.443s 00:18:52.910 13:02:56 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:18:52.910 13:02:56 -- common/autotest_common.sh@10 -- # set +x 00:18:52.910 13:02:56 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:18:52.910 13:02:56 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:18:52.910 13:02:56 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:18:52.910 13:02:56 -- common/autotest_common.sh@10 -- # set +x 00:18:52.910 ************************************ 00:18:52.910 START TEST raid_state_function_test_sb 00:18:52.910 ************************************ 00:18:52.910 13:02:56 -- common/autotest_common.sh@1099 -- # raid_state_function_test concat 3 true 00:18:52.910 13:02:56 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:18:52.910 13:02:56 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:18:52.910 13:02:56 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:18:52.910 13:02:56 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:52.910 13:02:56 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:18:52.910 13:02:56 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:52.910 13:02:56 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:52.910 13:02:56 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:52.910 13:02:56 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:52.910 13:02:56 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:52.910 13:02:56 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:52.910 13:02:56 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:52.910 13:02:56 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:52.910 13:02:56 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:52.910 13:02:56 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:52.910 13:02:56 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:52.910 13:02:56 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:52.910 13:02:56 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:52.910 13:02:56 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:52.910 13:02:56 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:52.910 13:02:56 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:52.910 13:02:56 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:18:52.910 13:02:56 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:18:52.910 13:02:56 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:18:52.910 13:02:56 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:18:52.910 13:02:56 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:18:52.910 13:02:56 -- bdev/bdev_raid.sh@226 -- # raid_pid=123834 00:18:52.910 13:02:56 -- 
bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 123834' 00:18:52.910 Process raid pid: 123834 00:18:52.910 13:02:56 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:52.910 13:02:56 -- bdev/bdev_raid.sh@228 -- # waitforlisten 123834 /var/tmp/spdk-raid.sock 00:18:52.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:52.910 13:02:56 -- common/autotest_common.sh@817 -- # '[' -z 123834 ']' 00:18:52.910 13:02:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:52.910 13:02:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:52.910 13:02:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:52.910 13:02:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:52.910 13:02:56 -- common/autotest_common.sh@10 -- # set +x 00:18:52.910 [2024-04-17 13:02:56.923286] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:18:52.910 [2024-04-17 13:02:56.923715] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:53.169 [2024-04-17 13:02:57.093645] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.169 [2024-04-17 13:02:57.308303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.427 [2024-04-17 13:02:57.513383] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:53.993 13:02:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:53.993 13:02:57 -- common/autotest_common.sh@850 -- # return 0 00:18:53.993 13:02:57 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:54.251 [2024-04-17 13:02:58.142261] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:54.251 [2024-04-17 13:02:58.142539] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:54.251 [2024-04-17 13:02:58.142705] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:54.251 [2024-04-17 13:02:58.142788] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:54.251 [2024-04-17 13:02:58.143042] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:54.251 [2024-04-17 13:02:58.143160] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:54.251 13:02:58 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:54.251 13:02:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:54.251 13:02:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:54.251 13:02:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:54.251 13:02:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:54.251 13:02:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:54.251 13:02:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:54.251 13:02:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:54.251 13:02:58 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:54.251 13:02:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:54.251 13:02:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:54.251 13:02:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:54.509 13:02:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:54.509 "name": "Existed_Raid", 00:18:54.509 "uuid": "ab56f01e-1497-4709-9665-d4d9a3376811", 00:18:54.509 "strip_size_kb": 64, 00:18:54.509 "state": "configuring", 00:18:54.509 "raid_level": "concat", 00:18:54.509 "superblock": true, 00:18:54.509 "num_base_bdevs": 3, 00:18:54.509 "num_base_bdevs_discovered": 0, 00:18:54.509 "num_base_bdevs_operational": 3, 00:18:54.509 "base_bdevs_list": [ 00:18:54.509 { 00:18:54.509 "name": "BaseBdev1", 00:18:54.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.509 "is_configured": false, 00:18:54.509 "data_offset": 0, 00:18:54.509 "data_size": 0 00:18:54.509 }, 00:18:54.509 { 00:18:54.509 "name": "BaseBdev2", 00:18:54.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.509 "is_configured": false, 00:18:54.509 "data_offset": 0, 00:18:54.509 "data_size": 0 00:18:54.509 }, 00:18:54.509 { 00:18:54.509 "name": "BaseBdev3", 00:18:54.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.509 "is_configured": false, 00:18:54.509 "data_offset": 0, 00:18:54.509 "data_size": 0 00:18:54.509 } 00:18:54.509 ] 00:18:54.509 }' 00:18:54.509 13:02:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:54.509 13:02:58 -- common/autotest_common.sh@10 -- # set +x 00:18:55.075 13:02:59 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:55.335 [2024-04-17 13:02:59.310398] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:55.335 [2024-04-17 13:02:59.310655] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:18:55.335 13:02:59 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:55.594 [2024-04-17 13:02:59.602494] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:55.594 [2024-04-17 13:02:59.602830] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:55.594 [2024-04-17 13:02:59.602992] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:55.594 [2024-04-17 13:02:59.603157] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:55.594 [2024-04-17 13:02:59.603283] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:55.594 [2024-04-17 13:02:59.603473] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:55.594 13:02:59 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:55.852 [2024-04-17 13:02:59.883679] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:55.852 BaseBdev1 00:18:55.852 13:02:59 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:55.852 13:02:59 -- common/autotest_common.sh@885 -- # local 
bdev_name=BaseBdev1 00:18:55.852 13:02:59 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:55.852 13:02:59 -- common/autotest_common.sh@887 -- # local i 00:18:55.852 13:02:59 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:55.852 13:02:59 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:55.852 13:02:59 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:56.110 13:03:00 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:56.369 [ 00:18:56.369 { 00:18:56.369 "name": "BaseBdev1", 00:18:56.369 "aliases": [ 00:18:56.369 "176f99d3-34c6-4582-acba-c1eb77fef691" 00:18:56.369 ], 00:18:56.369 "product_name": "Malloc disk", 00:18:56.369 "block_size": 512, 00:18:56.369 "num_blocks": 65536, 00:18:56.369 "uuid": "176f99d3-34c6-4582-acba-c1eb77fef691", 00:18:56.369 "assigned_rate_limits": { 00:18:56.369 "rw_ios_per_sec": 0, 00:18:56.369 "rw_mbytes_per_sec": 0, 00:18:56.369 "r_mbytes_per_sec": 0, 00:18:56.369 "w_mbytes_per_sec": 0 00:18:56.369 }, 00:18:56.369 "claimed": true, 00:18:56.369 "claim_type": "exclusive_write", 00:18:56.369 "zoned": false, 00:18:56.369 "supported_io_types": { 00:18:56.369 "read": true, 00:18:56.369 "write": true, 00:18:56.369 "unmap": true, 00:18:56.369 "write_zeroes": true, 00:18:56.369 "flush": true, 00:18:56.369 "reset": true, 00:18:56.369 "compare": false, 00:18:56.369 "compare_and_write": false, 00:18:56.369 "abort": true, 00:18:56.369 "nvme_admin": false, 00:18:56.369 "nvme_io": false 00:18:56.369 }, 00:18:56.369 "memory_domains": [ 00:18:56.369 { 00:18:56.369 "dma_device_id": "system", 00:18:56.369 "dma_device_type": 1 00:18:56.369 }, 00:18:56.369 { 00:18:56.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:56.369 "dma_device_type": 2 00:18:56.369 } 00:18:56.369 ], 00:18:56.369 "driver_specific": {} 00:18:56.369 } 00:18:56.369 ] 00:18:56.369 13:03:00 -- common/autotest_common.sh@893 -- # return 0 00:18:56.369 13:03:00 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:56.369 13:03:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:56.369 13:03:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:56.369 13:03:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:56.369 13:03:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:56.369 13:03:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:56.369 13:03:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:56.369 13:03:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:56.369 13:03:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:56.369 13:03:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:56.369 13:03:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:56.369 13:03:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:56.627 13:03:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:56.627 "name": "Existed_Raid", 00:18:56.627 "uuid": "f6963fa7-dea9-47a5-9624-5e74763db382", 00:18:56.627 "strip_size_kb": 64, 00:18:56.627 "state": "configuring", 00:18:56.627 "raid_level": "concat", 00:18:56.627 "superblock": true, 00:18:56.627 "num_base_bdevs": 3, 00:18:56.627 "num_base_bdevs_discovered": 1, 00:18:56.627 "num_base_bdevs_operational": 
3, 00:18:56.627 "base_bdevs_list": [ 00:18:56.627 { 00:18:56.627 "name": "BaseBdev1", 00:18:56.627 "uuid": "176f99d3-34c6-4582-acba-c1eb77fef691", 00:18:56.627 "is_configured": true, 00:18:56.627 "data_offset": 2048, 00:18:56.627 "data_size": 63488 00:18:56.627 }, 00:18:56.627 { 00:18:56.627 "name": "BaseBdev2", 00:18:56.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.627 "is_configured": false, 00:18:56.627 "data_offset": 0, 00:18:56.627 "data_size": 0 00:18:56.627 }, 00:18:56.627 { 00:18:56.627 "name": "BaseBdev3", 00:18:56.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.627 "is_configured": false, 00:18:56.627 "data_offset": 0, 00:18:56.627 "data_size": 0 00:18:56.627 } 00:18:56.627 ] 00:18:56.627 }' 00:18:56.627 13:03:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:56.627 13:03:00 -- common/autotest_common.sh@10 -- # set +x 00:18:57.561 13:03:01 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:57.561 [2024-04-17 13:03:01.640231] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:57.561 [2024-04-17 13:03:01.640509] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:18:57.561 13:03:01 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:18:57.561 13:03:01 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:57.820 13:03:01 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:58.385 BaseBdev1 00:18:58.385 13:03:02 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:18:58.385 13:03:02 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:18:58.385 13:03:02 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:58.385 13:03:02 -- common/autotest_common.sh@887 -- # local i 00:18:58.385 13:03:02 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:58.385 13:03:02 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:58.385 13:03:02 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:58.644 13:03:02 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:58.644 [ 00:18:58.644 { 00:18:58.644 "name": "BaseBdev1", 00:18:58.644 "aliases": [ 00:18:58.644 "73734561-48af-4fb7-86aa-e2868673d185" 00:18:58.644 ], 00:18:58.644 "product_name": "Malloc disk", 00:18:58.644 "block_size": 512, 00:18:58.644 "num_blocks": 65536, 00:18:58.644 "uuid": "73734561-48af-4fb7-86aa-e2868673d185", 00:18:58.644 "assigned_rate_limits": { 00:18:58.644 "rw_ios_per_sec": 0, 00:18:58.644 "rw_mbytes_per_sec": 0, 00:18:58.644 "r_mbytes_per_sec": 0, 00:18:58.644 "w_mbytes_per_sec": 0 00:18:58.644 }, 00:18:58.644 "claimed": false, 00:18:58.644 "zoned": false, 00:18:58.644 "supported_io_types": { 00:18:58.644 "read": true, 00:18:58.644 "write": true, 00:18:58.644 "unmap": true, 00:18:58.644 "write_zeroes": true, 00:18:58.644 "flush": true, 00:18:58.644 "reset": true, 00:18:58.644 "compare": false, 00:18:58.644 "compare_and_write": false, 00:18:58.644 "abort": true, 00:18:58.644 "nvme_admin": false, 00:18:58.644 "nvme_io": false 00:18:58.644 }, 00:18:58.644 "memory_domains": [ 00:18:58.644 { 00:18:58.644 "dma_device_id": "system", 
00:18:58.644 "dma_device_type": 1 00:18:58.644 }, 00:18:58.644 { 00:18:58.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.644 "dma_device_type": 2 00:18:58.644 } 00:18:58.644 ], 00:18:58.644 "driver_specific": {} 00:18:58.644 } 00:18:58.644 ] 00:18:58.644 13:03:02 -- common/autotest_common.sh@893 -- # return 0 00:18:58.645 13:03:02 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:58.903 [2024-04-17 13:03:02.967774] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:58.904 [2024-04-17 13:03:02.970122] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:58.904 [2024-04-17 13:03:02.970293] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:58.904 [2024-04-17 13:03:02.970481] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:58.904 [2024-04-17 13:03:02.970649] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:58.904 13:03:02 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:58.904 13:03:02 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:58.904 13:03:02 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:58.904 13:03:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:58.904 13:03:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:58.904 13:03:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:58.904 13:03:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:58.904 13:03:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:58.904 13:03:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:58.904 13:03:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:58.904 13:03:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:58.904 13:03:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:58.904 13:03:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:58.904 13:03:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:59.163 13:03:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:59.163 "name": "Existed_Raid", 00:18:59.163 "uuid": "9feaaec8-6dd8-471c-99ed-6183fca495c7", 00:18:59.163 "strip_size_kb": 64, 00:18:59.163 "state": "configuring", 00:18:59.163 "raid_level": "concat", 00:18:59.163 "superblock": true, 00:18:59.163 "num_base_bdevs": 3, 00:18:59.163 "num_base_bdevs_discovered": 1, 00:18:59.163 "num_base_bdevs_operational": 3, 00:18:59.163 "base_bdevs_list": [ 00:18:59.163 { 00:18:59.163 "name": "BaseBdev1", 00:18:59.163 "uuid": "73734561-48af-4fb7-86aa-e2868673d185", 00:18:59.163 "is_configured": true, 00:18:59.163 "data_offset": 2048, 00:18:59.163 "data_size": 63488 00:18:59.163 }, 00:18:59.163 { 00:18:59.163 "name": "BaseBdev2", 00:18:59.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.163 "is_configured": false, 00:18:59.163 "data_offset": 0, 00:18:59.163 "data_size": 0 00:18:59.163 }, 00:18:59.163 { 00:18:59.163 "name": "BaseBdev3", 00:18:59.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.163 "is_configured": false, 00:18:59.163 "data_offset": 0, 00:18:59.163 "data_size": 0 00:18:59.163 } 00:18:59.163 ] 00:18:59.163 }' 
00:18:59.163 13:03:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:59.163 13:03:03 -- common/autotest_common.sh@10 -- # set +x 00:19:00.099 13:03:03 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:00.099 [2024-04-17 13:03:04.174680] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:00.099 BaseBdev2 00:19:00.099 13:03:04 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:00.099 13:03:04 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:19:00.099 13:03:04 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:19:00.099 13:03:04 -- common/autotest_common.sh@887 -- # local i 00:19:00.099 13:03:04 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:19:00.099 13:03:04 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:19:00.099 13:03:04 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:00.357 13:03:04 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:00.615 [ 00:19:00.615 { 00:19:00.615 "name": "BaseBdev2", 00:19:00.615 "aliases": [ 00:19:00.615 "fb241f7d-2a03-4a04-8167-bdb51788a5f1" 00:19:00.615 ], 00:19:00.615 "product_name": "Malloc disk", 00:19:00.615 "block_size": 512, 00:19:00.615 "num_blocks": 65536, 00:19:00.615 "uuid": "fb241f7d-2a03-4a04-8167-bdb51788a5f1", 00:19:00.615 "assigned_rate_limits": { 00:19:00.615 "rw_ios_per_sec": 0, 00:19:00.615 "rw_mbytes_per_sec": 0, 00:19:00.615 "r_mbytes_per_sec": 0, 00:19:00.615 "w_mbytes_per_sec": 0 00:19:00.615 }, 00:19:00.615 "claimed": true, 00:19:00.615 "claim_type": "exclusive_write", 00:19:00.615 "zoned": false, 00:19:00.616 "supported_io_types": { 00:19:00.616 "read": true, 00:19:00.616 "write": true, 00:19:00.616 "unmap": true, 00:19:00.616 "write_zeroes": true, 00:19:00.616 "flush": true, 00:19:00.616 "reset": true, 00:19:00.616 "compare": false, 00:19:00.616 "compare_and_write": false, 00:19:00.616 "abort": true, 00:19:00.616 "nvme_admin": false, 00:19:00.616 "nvme_io": false 00:19:00.616 }, 00:19:00.616 "memory_domains": [ 00:19:00.616 { 00:19:00.616 "dma_device_id": "system", 00:19:00.616 "dma_device_type": 1 00:19:00.616 }, 00:19:00.616 { 00:19:00.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:00.616 "dma_device_type": 2 00:19:00.616 } 00:19:00.616 ], 00:19:00.616 "driver_specific": {} 00:19:00.616 } 00:19:00.616 ] 00:19:00.616 13:03:04 -- common/autotest_common.sh@893 -- # return 0 00:19:00.616 13:03:04 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:00.616 13:03:04 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:00.616 13:03:04 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:00.616 13:03:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:00.616 13:03:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:00.616 13:03:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:00.616 13:03:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:00.616 13:03:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:00.616 13:03:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:00.616 13:03:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:00.616 13:03:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:00.616 
13:03:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:00.616 13:03:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:00.616 13:03:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:00.875 13:03:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:00.875 "name": "Existed_Raid", 00:19:00.875 "uuid": "9feaaec8-6dd8-471c-99ed-6183fca495c7", 00:19:00.875 "strip_size_kb": 64, 00:19:00.875 "state": "configuring", 00:19:00.875 "raid_level": "concat", 00:19:00.875 "superblock": true, 00:19:00.875 "num_base_bdevs": 3, 00:19:00.875 "num_base_bdevs_discovered": 2, 00:19:00.875 "num_base_bdevs_operational": 3, 00:19:00.875 "base_bdevs_list": [ 00:19:00.875 { 00:19:00.875 "name": "BaseBdev1", 00:19:00.875 "uuid": "73734561-48af-4fb7-86aa-e2868673d185", 00:19:00.875 "is_configured": true, 00:19:00.875 "data_offset": 2048, 00:19:00.875 "data_size": 63488 00:19:00.875 }, 00:19:00.875 { 00:19:00.875 "name": "BaseBdev2", 00:19:00.875 "uuid": "fb241f7d-2a03-4a04-8167-bdb51788a5f1", 00:19:00.875 "is_configured": true, 00:19:00.875 "data_offset": 2048, 00:19:00.875 "data_size": 63488 00:19:00.875 }, 00:19:00.875 { 00:19:00.875 "name": "BaseBdev3", 00:19:00.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.875 "is_configured": false, 00:19:00.875 "data_offset": 0, 00:19:00.875 "data_size": 0 00:19:00.875 } 00:19:00.875 ] 00:19:00.875 }' 00:19:00.875 13:03:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:00.875 13:03:04 -- common/autotest_common.sh@10 -- # set +x 00:19:01.810 13:03:05 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:02.070 [2024-04-17 13:03:06.005433] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:02.070 [2024-04-17 13:03:06.005895] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:19:02.070 [2024-04-17 13:03:06.006018] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:02.070 BaseBdev3 00:19:02.070 [2024-04-17 13:03:06.006200] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:19:02.070 [2024-04-17 13:03:06.006705] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:19:02.070 [2024-04-17 13:03:06.006824] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:19:02.070 [2024-04-17 13:03:06.007072] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:02.070 13:03:06 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:02.070 13:03:06 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:19:02.070 13:03:06 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:19:02.070 13:03:06 -- common/autotest_common.sh@887 -- # local i 00:19:02.070 13:03:06 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:19:02.070 13:03:06 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:19:02.070 13:03:06 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:02.330 13:03:06 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:02.589 [ 00:19:02.589 { 00:19:02.589 "name": "BaseBdev3", 00:19:02.589 
"aliases": [ 00:19:02.589 "346f9048-0490-4f62-b7bb-317f2b7b4ff1" 00:19:02.589 ], 00:19:02.589 "product_name": "Malloc disk", 00:19:02.589 "block_size": 512, 00:19:02.589 "num_blocks": 65536, 00:19:02.589 "uuid": "346f9048-0490-4f62-b7bb-317f2b7b4ff1", 00:19:02.589 "assigned_rate_limits": { 00:19:02.589 "rw_ios_per_sec": 0, 00:19:02.589 "rw_mbytes_per_sec": 0, 00:19:02.589 "r_mbytes_per_sec": 0, 00:19:02.589 "w_mbytes_per_sec": 0 00:19:02.589 }, 00:19:02.589 "claimed": true, 00:19:02.589 "claim_type": "exclusive_write", 00:19:02.589 "zoned": false, 00:19:02.589 "supported_io_types": { 00:19:02.589 "read": true, 00:19:02.589 "write": true, 00:19:02.589 "unmap": true, 00:19:02.589 "write_zeroes": true, 00:19:02.589 "flush": true, 00:19:02.589 "reset": true, 00:19:02.589 "compare": false, 00:19:02.589 "compare_and_write": false, 00:19:02.589 "abort": true, 00:19:02.589 "nvme_admin": false, 00:19:02.589 "nvme_io": false 00:19:02.589 }, 00:19:02.589 "memory_domains": [ 00:19:02.589 { 00:19:02.589 "dma_device_id": "system", 00:19:02.589 "dma_device_type": 1 00:19:02.589 }, 00:19:02.589 { 00:19:02.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:02.589 "dma_device_type": 2 00:19:02.589 } 00:19:02.589 ], 00:19:02.589 "driver_specific": {} 00:19:02.589 } 00:19:02.589 ] 00:19:02.589 13:03:06 -- common/autotest_common.sh@893 -- # return 0 00:19:02.589 13:03:06 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:02.589 13:03:06 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:02.589 13:03:06 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:19:02.589 13:03:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:02.589 13:03:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:02.589 13:03:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:02.589 13:03:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:02.589 13:03:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:02.589 13:03:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:02.589 13:03:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:02.589 13:03:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:02.589 13:03:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:02.589 13:03:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.589 13:03:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:02.846 13:03:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:02.846 "name": "Existed_Raid", 00:19:02.846 "uuid": "9feaaec8-6dd8-471c-99ed-6183fca495c7", 00:19:02.846 "strip_size_kb": 64, 00:19:02.846 "state": "online", 00:19:02.846 "raid_level": "concat", 00:19:02.846 "superblock": true, 00:19:02.846 "num_base_bdevs": 3, 00:19:02.846 "num_base_bdevs_discovered": 3, 00:19:02.846 "num_base_bdevs_operational": 3, 00:19:02.846 "base_bdevs_list": [ 00:19:02.846 { 00:19:02.846 "name": "BaseBdev1", 00:19:02.846 "uuid": "73734561-48af-4fb7-86aa-e2868673d185", 00:19:02.846 "is_configured": true, 00:19:02.846 "data_offset": 2048, 00:19:02.846 "data_size": 63488 00:19:02.846 }, 00:19:02.846 { 00:19:02.846 "name": "BaseBdev2", 00:19:02.846 "uuid": "fb241f7d-2a03-4a04-8167-bdb51788a5f1", 00:19:02.846 "is_configured": true, 00:19:02.846 "data_offset": 2048, 00:19:02.846 "data_size": 63488 00:19:02.846 }, 00:19:02.846 { 00:19:02.846 "name": "BaseBdev3", 00:19:02.846 "uuid": 
"346f9048-0490-4f62-b7bb-317f2b7b4ff1", 00:19:02.846 "is_configured": true, 00:19:02.846 "data_offset": 2048, 00:19:02.846 "data_size": 63488 00:19:02.846 } 00:19:02.846 ] 00:19:02.846 }' 00:19:02.847 13:03:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:02.847 13:03:06 -- common/autotest_common.sh@10 -- # set +x 00:19:03.414 13:03:07 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:03.672 [2024-04-17 13:03:07.782189] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:03.672 [2024-04-17 13:03:07.782612] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:03.672 [2024-04-17 13:03:07.782813] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:03.930 13:03:07 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:03.930 13:03:07 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:19:03.930 13:03:07 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:03.930 13:03:07 -- bdev/bdev_raid.sh@197 -- # return 1 00:19:03.930 13:03:07 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:19:03.930 13:03:07 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:19:03.930 13:03:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:03.930 13:03:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:19:03.930 13:03:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:03.930 13:03:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:03.930 13:03:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:03.930 13:03:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:03.930 13:03:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:03.930 13:03:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:03.930 13:03:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:03.930 13:03:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:03.930 13:03:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:04.188 13:03:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:04.188 "name": "Existed_Raid", 00:19:04.188 "uuid": "9feaaec8-6dd8-471c-99ed-6183fca495c7", 00:19:04.188 "strip_size_kb": 64, 00:19:04.188 "state": "offline", 00:19:04.189 "raid_level": "concat", 00:19:04.189 "superblock": true, 00:19:04.189 "num_base_bdevs": 3, 00:19:04.189 "num_base_bdevs_discovered": 2, 00:19:04.189 "num_base_bdevs_operational": 2, 00:19:04.189 "base_bdevs_list": [ 00:19:04.189 { 00:19:04.189 "name": null, 00:19:04.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.189 "is_configured": false, 00:19:04.189 "data_offset": 2048, 00:19:04.189 "data_size": 63488 00:19:04.189 }, 00:19:04.189 { 00:19:04.189 "name": "BaseBdev2", 00:19:04.189 "uuid": "fb241f7d-2a03-4a04-8167-bdb51788a5f1", 00:19:04.189 "is_configured": true, 00:19:04.189 "data_offset": 2048, 00:19:04.189 "data_size": 63488 00:19:04.189 }, 00:19:04.189 { 00:19:04.189 "name": "BaseBdev3", 00:19:04.189 "uuid": "346f9048-0490-4f62-b7bb-317f2b7b4ff1", 00:19:04.189 "is_configured": true, 00:19:04.189 "data_offset": 2048, 00:19:04.189 "data_size": 63488 00:19:04.189 } 00:19:04.189 ] 00:19:04.189 }' 00:19:04.189 13:03:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:04.189 13:03:08 -- common/autotest_common.sh@10 -- # set +x 00:19:05.123 13:03:08 -- 
bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:05.123 13:03:08 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:05.123 13:03:08 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.123 13:03:08 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:05.123 13:03:09 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:05.123 13:03:09 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:05.123 13:03:09 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:05.382 [2024-04-17 13:03:09.405933] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:05.382 13:03:09 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:05.382 13:03:09 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:05.382 13:03:09 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.382 13:03:09 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:05.949 13:03:09 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:05.949 13:03:09 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:05.949 13:03:09 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:05.949 [2024-04-17 13:03:10.041889] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:05.949 [2024-04-17 13:03:10.042177] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:19:06.207 13:03:10 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:06.207 13:03:10 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:06.207 13:03:10 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:06.207 13:03:10 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:19:06.465 13:03:10 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:19:06.465 13:03:10 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:19:06.465 13:03:10 -- bdev/bdev_raid.sh@287 -- # killprocess 123834 00:19:06.465 13:03:10 -- common/autotest_common.sh@924 -- # '[' -z 123834 ']' 00:19:06.465 13:03:10 -- common/autotest_common.sh@928 -- # kill -0 123834 00:19:06.465 13:03:10 -- common/autotest_common.sh@929 -- # uname 00:19:06.465 13:03:10 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:19:06.465 13:03:10 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 123834 00:19:06.465 13:03:10 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:19:06.465 13:03:10 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:19:06.465 13:03:10 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 123834' 00:19:06.465 killing process with pid 123834 00:19:06.465 13:03:10 -- common/autotest_common.sh@943 -- # kill 123834 00:19:06.465 13:03:10 -- common/autotest_common.sh@948 -- # wait 123834 00:19:06.465 [2024-04-17 13:03:10.402619] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:06.465 [2024-04-17 13:03:10.402733] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:07.402 13:03:11 -- bdev/bdev_raid.sh@289 -- # return 0 00:19:07.402 00:19:07.402 real 0m14.680s 00:19:07.402 user 0m26.119s 00:19:07.402 sys 0m1.670s 00:19:07.402 13:03:11 -- common/autotest_common.sh@1100 -- # 
xtrace_disable 00:19:07.402 13:03:11 -- common/autotest_common.sh@10 -- # set +x 00:19:07.402 ************************************ 00:19:07.402 END TEST raid_state_function_test_sb 00:19:07.402 ************************************ 00:19:07.660 13:03:11 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:19:07.660 13:03:11 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:19:07.660 13:03:11 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:19:07.660 13:03:11 -- common/autotest_common.sh@10 -- # set +x 00:19:07.660 ************************************ 00:19:07.660 START TEST raid_superblock_test 00:19:07.660 ************************************ 00:19:07.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:07.660 13:03:11 -- common/autotest_common.sh@1099 -- # raid_superblock_test concat 3 00:19:07.660 13:03:11 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:19:07.660 13:03:11 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:19:07.660 13:03:11 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:19:07.660 13:03:11 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:19:07.660 13:03:11 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:19:07.660 13:03:11 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:19:07.660 13:03:11 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:19:07.660 13:03:11 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:19:07.660 13:03:11 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:19:07.660 13:03:11 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:19:07.660 13:03:11 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:19:07.660 13:03:11 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:19:07.660 13:03:11 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:19:07.660 13:03:11 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:19:07.660 13:03:11 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:19:07.660 13:03:11 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:19:07.660 13:03:11 -- bdev/bdev_raid.sh@357 -- # raid_pid=124271 00:19:07.660 13:03:11 -- bdev/bdev_raid.sh@358 -- # waitforlisten 124271 /var/tmp/spdk-raid.sock 00:19:07.660 13:03:11 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:19:07.660 13:03:11 -- common/autotest_common.sh@817 -- # '[' -z 124271 ']' 00:19:07.660 13:03:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:07.661 13:03:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:07.661 13:03:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:07.661 13:03:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:07.661 13:03:11 -- common/autotest_common.sh@10 -- # set +x 00:19:07.661 [2024-04-17 13:03:11.669555] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:19:07.661 [2024-04-17 13:03:11.669949] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124271 ] 00:19:07.919 [2024-04-17 13:03:11.833489] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.177 [2024-04-17 13:03:12.081258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.177 [2024-04-17 13:03:12.286788] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:08.744 13:03:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:08.744 13:03:12 -- common/autotest_common.sh@850 -- # return 0 00:19:08.744 13:03:12 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:19:08.744 13:03:12 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:08.744 13:03:12 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:19:08.744 13:03:12 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:19:08.744 13:03:12 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:08.744 13:03:12 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:08.744 13:03:12 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:08.744 13:03:12 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:08.744 13:03:12 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:19:09.002 malloc1 00:19:09.002 13:03:12 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:09.002 [2024-04-17 13:03:13.129106] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:09.002 [2024-04-17 13:03:13.129399] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:09.002 [2024-04-17 13:03:13.129472] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:19:09.002 [2024-04-17 13:03:13.129731] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:09.002 [2024-04-17 13:03:13.132537] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:09.002 [2024-04-17 13:03:13.132727] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:09.002 pt1 00:19:09.002 13:03:13 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:09.002 13:03:13 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:09.002 13:03:13 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:19:09.002 13:03:13 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:19:09.002 13:03:13 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:09.002 13:03:13 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:09.002 13:03:13 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:09.002 13:03:13 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:09.002 13:03:13 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:19:09.260 malloc2 00:19:09.518 13:03:13 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:19:09.518 [2024-04-17 13:03:13.627008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:09.518 [2024-04-17 13:03:13.627308] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:09.518 [2024-04-17 13:03:13.627395] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:19:09.518 [2024-04-17 13:03:13.627694] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:09.518 [2024-04-17 13:03:13.630341] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:09.518 [2024-04-17 13:03:13.630516] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:09.518 pt2 00:19:09.518 13:03:13 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:09.518 13:03:13 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:09.518 13:03:13 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:19:09.518 13:03:13 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:19:09.518 13:03:13 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:09.518 13:03:13 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:09.518 13:03:13 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:09.518 13:03:13 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:09.518 13:03:13 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:19:09.838 malloc3 00:19:09.838 13:03:13 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:10.106 [2024-04-17 13:03:14.153685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:10.106 [2024-04-17 13:03:14.153927] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:10.106 [2024-04-17 13:03:14.154007] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:19:10.106 [2024-04-17 13:03:14.154239] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:10.106 [2024-04-17 13:03:14.156875] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:10.106 [2024-04-17 13:03:14.157047] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:10.106 pt3 00:19:10.106 13:03:14 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:10.106 13:03:14 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:10.106 13:03:14 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:19:10.363 [2024-04-17 13:03:14.385816] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:10.363 [2024-04-17 13:03:14.388149] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:10.363 [2024-04-17 13:03:14.388345] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:10.363 [2024-04-17 13:03:14.388607] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:19:10.363 [2024-04-17 13:03:14.388745] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:10.363 [2024-04-17 13:03:14.388955] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:19:10.363 [2024-04-17 13:03:14.389496] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:19:10.363 [2024-04-17 13:03:14.389611] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:19:10.363 [2024-04-17 13:03:14.389914] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:10.363 13:03:14 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:10.363 13:03:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:10.363 13:03:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:10.363 13:03:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:10.363 13:03:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:10.363 13:03:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:10.363 13:03:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:10.363 13:03:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:10.363 13:03:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:10.363 13:03:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:10.363 13:03:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:10.363 13:03:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.620 13:03:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:10.620 "name": "raid_bdev1", 00:19:10.620 "uuid": "210172db-3a06-44c0-b3a8-2099410cbc77", 00:19:10.620 "strip_size_kb": 64, 00:19:10.620 "state": "online", 00:19:10.620 "raid_level": "concat", 00:19:10.620 "superblock": true, 00:19:10.620 "num_base_bdevs": 3, 00:19:10.620 "num_base_bdevs_discovered": 3, 00:19:10.620 "num_base_bdevs_operational": 3, 00:19:10.620 "base_bdevs_list": [ 00:19:10.620 { 00:19:10.620 "name": "pt1", 00:19:10.620 "uuid": "15162766-e344-5c40-8c9f-5711e0b2da10", 00:19:10.620 "is_configured": true, 00:19:10.620 "data_offset": 2048, 00:19:10.620 "data_size": 63488 00:19:10.620 }, 00:19:10.620 { 00:19:10.620 "name": "pt2", 00:19:10.620 "uuid": "8280957f-94fe-53b0-8a4e-f2d9dc0fdd8d", 00:19:10.620 "is_configured": true, 00:19:10.620 "data_offset": 2048, 00:19:10.620 "data_size": 63488 00:19:10.620 }, 00:19:10.620 { 00:19:10.620 "name": "pt3", 00:19:10.620 "uuid": "131b68fa-e382-5020-a3b5-39f07e7872f4", 00:19:10.620 "is_configured": true, 00:19:10.620 "data_offset": 2048, 00:19:10.620 "data_size": 63488 00:19:10.620 } 00:19:10.620 ] 00:19:10.620 }' 00:19:10.620 13:03:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:10.620 13:03:14 -- common/autotest_common.sh@10 -- # set +x 00:19:11.556 13:03:15 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:11.556 13:03:15 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:19:11.556 [2024-04-17 13:03:15.606470] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:11.556 13:03:15 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=210172db-3a06-44c0-b3a8-2099410cbc77 00:19:11.556 13:03:15 -- bdev/bdev_raid.sh@380 -- # '[' -z 210172db-3a06-44c0-b3a8-2099410cbc77 ']' 00:19:11.556 13:03:15 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:11.814 [2024-04-17 13:03:15.874227] 
bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:11.814 [2024-04-17 13:03:15.874435] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:11.814 [2024-04-17 13:03:15.874637] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:11.814 [2024-04-17 13:03:15.874825] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:11.814 [2024-04-17 13:03:15.874937] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:19:11.814 13:03:15 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:11.814 13:03:15 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:19:12.071 13:03:16 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:19:12.071 13:03:16 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:19:12.071 13:03:16 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:12.071 13:03:16 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:12.328 13:03:16 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:12.328 13:03:16 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:12.586 13:03:16 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:12.586 13:03:16 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:12.857 13:03:16 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:12.857 13:03:16 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:13.125 13:03:17 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:19:13.125 13:03:17 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:13.125 13:03:17 -- common/autotest_common.sh@638 -- # local es=0 00:19:13.125 13:03:17 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:13.125 13:03:17 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:13.125 13:03:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:13.125 13:03:17 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:13.125 13:03:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:13.125 13:03:17 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:13.125 13:03:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:13.125 13:03:17 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:13.125 13:03:17 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:13.125 13:03:17 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:13.382 [2024-04-17 13:03:17.323353] 
bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:13.382 [2024-04-17 13:03:17.325975] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:13.382 [2024-04-17 13:03:17.326152] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:13.382 [2024-04-17 13:03:17.326252] bdev_raid.c:2995:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:19:13.382 [2024-04-17 13:03:17.326553] bdev_raid.c:2995:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:19:13.382 [2024-04-17 13:03:17.326719] bdev_raid.c:2995:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:19:13.382 [2024-04-17 13:03:17.326870] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:13.382 [2024-04-17 13:03:17.326979] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring 00:19:13.382 request: 00:19:13.382 { 00:19:13.382 "name": "raid_bdev1", 00:19:13.382 "raid_level": "concat", 00:19:13.382 "base_bdevs": [ 00:19:13.382 "malloc1", 00:19:13.382 "malloc2", 00:19:13.382 "malloc3" 00:19:13.382 ], 00:19:13.382 "superblock": false, 00:19:13.382 "strip_size_kb": 64, 00:19:13.382 "method": "bdev_raid_create", 00:19:13.382 "req_id": 1 00:19:13.382 } 00:19:13.382 Got JSON-RPC error response 00:19:13.382 response: 00:19:13.382 { 00:19:13.382 "code": -17, 00:19:13.382 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:13.382 } 00:19:13.382 13:03:17 -- common/autotest_common.sh@641 -- # es=1 00:19:13.382 13:03:17 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:13.382 13:03:17 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:13.382 13:03:17 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:13.382 13:03:17 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:13.382 13:03:17 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:19:13.639 13:03:17 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:19:13.639 13:03:17 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:19:13.639 13:03:17 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:13.897 [2024-04-17 13:03:17.871402] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:13.897 [2024-04-17 13:03:17.871851] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:13.897 [2024-04-17 13:03:17.871993] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:13.897 [2024-04-17 13:03:17.872110] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:13.897 [2024-04-17 13:03:17.874689] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:13.897 [2024-04-17 13:03:17.874849] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:13.897 [2024-04-17 13:03:17.875077] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:19:13.897 [2024-04-17 13:03:17.875247] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:13.897 pt1 00:19:13.897 13:03:17 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 
configuring concat 64 3 00:19:13.897 13:03:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:13.897 13:03:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:13.897 13:03:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:13.897 13:03:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:13.897 13:03:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:13.897 13:03:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:13.897 13:03:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:13.897 13:03:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:13.897 13:03:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:13.897 13:03:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:13.897 13:03:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.154 13:03:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:14.154 "name": "raid_bdev1", 00:19:14.154 "uuid": "210172db-3a06-44c0-b3a8-2099410cbc77", 00:19:14.154 "strip_size_kb": 64, 00:19:14.154 "state": "configuring", 00:19:14.154 "raid_level": "concat", 00:19:14.154 "superblock": true, 00:19:14.154 "num_base_bdevs": 3, 00:19:14.154 "num_base_bdevs_discovered": 1, 00:19:14.154 "num_base_bdevs_operational": 3, 00:19:14.154 "base_bdevs_list": [ 00:19:14.154 { 00:19:14.154 "name": "pt1", 00:19:14.154 "uuid": "15162766-e344-5c40-8c9f-5711e0b2da10", 00:19:14.154 "is_configured": true, 00:19:14.154 "data_offset": 2048, 00:19:14.154 "data_size": 63488 00:19:14.154 }, 00:19:14.154 { 00:19:14.154 "name": null, 00:19:14.154 "uuid": "8280957f-94fe-53b0-8a4e-f2d9dc0fdd8d", 00:19:14.154 "is_configured": false, 00:19:14.154 "data_offset": 2048, 00:19:14.154 "data_size": 63488 00:19:14.154 }, 00:19:14.154 { 00:19:14.154 "name": null, 00:19:14.154 "uuid": "131b68fa-e382-5020-a3b5-39f07e7872f4", 00:19:14.154 "is_configured": false, 00:19:14.154 "data_offset": 2048, 00:19:14.154 "data_size": 63488 00:19:14.154 } 00:19:14.154 ] 00:19:14.154 }' 00:19:14.154 13:03:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:14.154 13:03:18 -- common/autotest_common.sh@10 -- # set +x 00:19:14.720 13:03:18 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:19:14.720 13:03:18 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:14.978 [2024-04-17 13:03:19.079884] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:14.978 [2024-04-17 13:03:19.080171] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:14.978 [2024-04-17 13:03:19.080329] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:19:14.978 [2024-04-17 13:03:19.080456] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:14.978 [2024-04-17 13:03:19.081004] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:14.978 [2024-04-17 13:03:19.081148] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:14.978 [2024-04-17 13:03:19.081395] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:14.978 [2024-04-17 13:03:19.081532] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:14.978 pt2 00:19:14.978 13:03:19 -- 
bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:15.236 [2024-04-17 13:03:19.300016] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:15.236 13:03:19 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:19:15.236 13:03:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:15.236 13:03:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:15.236 13:03:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:15.236 13:03:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:15.236 13:03:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:15.236 13:03:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:15.236 13:03:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:15.236 13:03:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:15.236 13:03:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:15.236 13:03:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:15.236 13:03:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.493 13:03:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:15.493 "name": "raid_bdev1", 00:19:15.493 "uuid": "210172db-3a06-44c0-b3a8-2099410cbc77", 00:19:15.493 "strip_size_kb": 64, 00:19:15.493 "state": "configuring", 00:19:15.494 "raid_level": "concat", 00:19:15.494 "superblock": true, 00:19:15.494 "num_base_bdevs": 3, 00:19:15.494 "num_base_bdevs_discovered": 1, 00:19:15.494 "num_base_bdevs_operational": 3, 00:19:15.494 "base_bdevs_list": [ 00:19:15.494 { 00:19:15.494 "name": "pt1", 00:19:15.494 "uuid": "15162766-e344-5c40-8c9f-5711e0b2da10", 00:19:15.494 "is_configured": true, 00:19:15.494 "data_offset": 2048, 00:19:15.494 "data_size": 63488 00:19:15.494 }, 00:19:15.494 { 00:19:15.494 "name": null, 00:19:15.494 "uuid": "8280957f-94fe-53b0-8a4e-f2d9dc0fdd8d", 00:19:15.494 "is_configured": false, 00:19:15.494 "data_offset": 2048, 00:19:15.494 "data_size": 63488 00:19:15.494 }, 00:19:15.494 { 00:19:15.494 "name": null, 00:19:15.494 "uuid": "131b68fa-e382-5020-a3b5-39f07e7872f4", 00:19:15.494 "is_configured": false, 00:19:15.494 "data_offset": 2048, 00:19:15.494 "data_size": 63488 00:19:15.494 } 00:19:15.494 ] 00:19:15.494 }' 00:19:15.494 13:03:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:15.494 13:03:19 -- common/autotest_common.sh@10 -- # set +x 00:19:16.449 13:03:20 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:19:16.449 13:03:20 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:16.449 13:03:20 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:16.449 [2024-04-17 13:03:20.504276] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:16.449 [2024-04-17 13:03:20.504551] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:16.449 [2024-04-17 13:03:20.504631] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:16.449 [2024-04-17 13:03:20.504847] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:16.449 [2024-04-17 13:03:20.505407] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:16.449 [2024-04-17 13:03:20.505555] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:16.449 [2024-04-17 13:03:20.505778] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:16.449 [2024-04-17 13:03:20.505902] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:16.449 pt2 00:19:16.449 13:03:20 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:16.449 13:03:20 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:16.449 13:03:20 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:16.737 [2024-04-17 13:03:20.764382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:16.737 [2024-04-17 13:03:20.764657] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:16.737 [2024-04-17 13:03:20.764734] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:19:16.737 [2024-04-17 13:03:20.764979] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:16.737 [2024-04-17 13:03:20.765510] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:16.737 [2024-04-17 13:03:20.765665] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:16.737 [2024-04-17 13:03:20.765943] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:19:16.737 [2024-04-17 13:03:20.766079] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:16.737 [2024-04-17 13:03:20.766319] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:19:16.737 [2024-04-17 13:03:20.766432] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:16.737 [2024-04-17 13:03:20.766589] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:19:16.737 [2024-04-17 13:03:20.766967] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:19:16.737 [2024-04-17 13:03:20.767084] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:19:16.737 [2024-04-17 13:03:20.767319] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:16.737 pt3 00:19:16.737 13:03:20 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:16.737 13:03:20 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:16.738 13:03:20 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:16.738 13:03:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:16.738 13:03:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:16.738 13:03:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:16.738 13:03:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:16.738 13:03:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:16.738 13:03:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:16.738 13:03:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:16.738 13:03:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:16.738 13:03:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:16.738 13:03:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:16.738 
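The rebuild loop traced above repeats the test's standard three-step recipe per member: create a malloc backing device, wrap it in a passthru bdev with a fixed UUID, then hand the passthru set to bdev_raid_create. A condensed sketch of that RPC sequence, using only invocations that appear in this trace (the rpc wrapper function is shorthand introduced here, not part of the test script):

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
for i in 1 2 3; do
  # 32 MB malloc backing device with 512-byte blocks
  rpc bdev_malloc_create 32 512 -b "malloc$i"
  # wrap it so the raid consumes a passthru bdev with a deterministic UUID
  rpc bdev_passthru_create -b "malloc$i" -p "pt$i" -u "00000000-0000-0000-0000-00000000000$i"
done
# concat level, 64 KiB strip (-z 64), on-disk superblock (-s)
rpc bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s
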
13:03:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.996 13:03:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:16.996 "name": "raid_bdev1", 00:19:16.996 "uuid": "210172db-3a06-44c0-b3a8-2099410cbc77", 00:19:16.996 "strip_size_kb": 64, 00:19:16.996 "state": "online", 00:19:16.996 "raid_level": "concat", 00:19:16.996 "superblock": true, 00:19:16.996 "num_base_bdevs": 3, 00:19:16.996 "num_base_bdevs_discovered": 3, 00:19:16.996 "num_base_bdevs_operational": 3, 00:19:16.996 "base_bdevs_list": [ 00:19:16.996 { 00:19:16.996 "name": "pt1", 00:19:16.996 "uuid": "15162766-e344-5c40-8c9f-5711e0b2da10", 00:19:16.996 "is_configured": true, 00:19:16.996 "data_offset": 2048, 00:19:16.996 "data_size": 63488 00:19:16.996 }, 00:19:16.996 { 00:19:16.996 "name": "pt2", 00:19:16.996 "uuid": "8280957f-94fe-53b0-8a4e-f2d9dc0fdd8d", 00:19:16.996 "is_configured": true, 00:19:16.996 "data_offset": 2048, 00:19:16.996 "data_size": 63488 00:19:16.996 }, 00:19:16.996 { 00:19:16.996 "name": "pt3", 00:19:16.996 "uuid": "131b68fa-e382-5020-a3b5-39f07e7872f4", 00:19:16.996 "is_configured": true, 00:19:16.996 "data_offset": 2048, 00:19:16.996 "data_size": 63488 00:19:16.996 } 00:19:16.996 ] 00:19:16.996 }' 00:19:16.996 13:03:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:16.996 13:03:21 -- common/autotest_common.sh@10 -- # set +x 00:19:17.931 13:03:21 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:17.931 13:03:21 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:19:17.931 [2024-04-17 13:03:21.956928] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:17.931 13:03:21 -- bdev/bdev_raid.sh@430 -- # '[' 210172db-3a06-44c0-b3a8-2099410cbc77 '!=' 210172db-3a06-44c0-b3a8-2099410cbc77 ']' 00:19:17.931 13:03:21 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:19:17.931 13:03:21 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:17.931 13:03:21 -- bdev/bdev_raid.sh@197 -- # return 1 00:19:17.931 13:03:21 -- bdev/bdev_raid.sh@511 -- # killprocess 124271 00:19:17.931 13:03:21 -- common/autotest_common.sh@924 -- # '[' -z 124271 ']' 00:19:17.931 13:03:21 -- common/autotest_common.sh@928 -- # kill -0 124271 00:19:17.931 13:03:21 -- common/autotest_common.sh@929 -- # uname 00:19:17.931 13:03:21 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:19:17.931 13:03:21 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 124271 00:19:17.931 killing process with pid 124271 00:19:17.931 13:03:21 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:19:17.931 13:03:21 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:19:17.931 13:03:21 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 124271' 00:19:17.931 13:03:21 -- common/autotest_common.sh@943 -- # kill 124271 00:19:17.931 13:03:21 -- common/autotest_common.sh@948 -- # wait 124271 00:19:17.931 [2024-04-17 13:03:21.993068] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:17.931 [2024-04-17 13:03:21.993144] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:17.931 [2024-04-17 13:03:21.993206] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:17.931 [2024-04-17 13:03:21.993317] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:19:18.189 [2024-04-17 13:03:22.235209] 
bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:19.563 ************************************ 00:19:19.563 END TEST raid_superblock_test 00:19:19.563 ************************************ 00:19:19.563 13:03:23 -- bdev/bdev_raid.sh@513 -- # return 0 00:19:19.563 00:19:19.563 real 0m11.726s 00:19:19.563 user 0m20.678s 00:19:19.563 sys 0m1.177s 00:19:19.563 13:03:23 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:19:19.563 13:03:23 -- common/autotest_common.sh@10 -- # set +x 00:19:19.563 13:03:23 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:19:19.563 13:03:23 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:19:19.563 13:03:23 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:19:19.563 13:03:23 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:19:19.563 13:03:23 -- common/autotest_common.sh@10 -- # set +x 00:19:19.563 ************************************ 00:19:19.563 START TEST raid_state_function_test 00:19:19.563 ************************************ 00:19:19.563 13:03:23 -- common/autotest_common.sh@1099 -- # raid_state_function_test raid1 3 false 00:19:19.563 13:03:23 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:19:19.563 13:03:23 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:19:19.563 13:03:23 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:19:19.563 13:03:23 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:19:19.563 13:03:23 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:19:19.563 13:03:23 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:19:19.563 13:03:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:19.563 13:03:23 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:19:19.563 13:03:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:19.563 13:03:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:19.563 13:03:23 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:19:19.563 13:03:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:19.563 13:03:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:19.563 13:03:23 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:19:19.563 13:03:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:19.563 13:03:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:19.563 Process raid pid: 124613 00:19:19.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
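The two echoes above mark the start of raid_state_function_test: each run launches its own RPC target and blocks until the socket answers before issuing any bdev commands. A minimal sketch of that startup, assuming bdev_svc is backgrounded in the usual way (the immediate raid_pid capture in the trace suggests $!):

# start the bdev service app with raid-level debug logging on a private socket
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
raid_pid=$!
# harness helper seen in the trace; waits until the app listens on the socket
waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock
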
00:19:19.563 13:03:23 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:19:19.563 13:03:23 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:19:19.563 13:03:23 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:19:19.563 13:03:23 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:19:19.563 13:03:23 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:19:19.563 13:03:23 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:19:19.563 13:03:23 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:19:19.563 13:03:23 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:19:19.563 13:03:23 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:19:19.563 13:03:23 -- bdev/bdev_raid.sh@226 -- # raid_pid=124613 00:19:19.563 13:03:23 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 124613' 00:19:19.563 13:03:23 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:19.563 13:03:23 -- bdev/bdev_raid.sh@228 -- # waitforlisten 124613 /var/tmp/spdk-raid.sock 00:19:19.563 13:03:23 -- common/autotest_common.sh@817 -- # '[' -z 124613 ']' 00:19:19.563 13:03:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:19.563 13:03:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:19.563 13:03:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:19.563 13:03:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:19.563 13:03:23 -- common/autotest_common.sh@10 -- # set +x 00:19:19.563 [2024-04-17 13:03:23.480606] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:19:19.563 [2024-04-17 13:03:23.480941] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:19.563 [2024-04-17 13:03:23.652738] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.876 [2024-04-17 13:03:23.898535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.152 [2024-04-17 13:03:24.099229] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:20.410 13:03:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:20.410 13:03:24 -- common/autotest_common.sh@850 -- # return 0 00:19:20.410 13:03:24 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:20.669 [2024-04-17 13:03:24.610708] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:20.669 [2024-04-17 13:03:24.610991] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:20.669 [2024-04-17 13:03:24.611107] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:20.669 [2024-04-17 13:03:24.611243] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:20.669 [2024-04-17 13:03:24.611343] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:20.669 [2024-04-17 13:03:24.611426] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:20.669 13:03:24 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:19:20.669 13:03:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:20.669 13:03:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:20.669 13:03:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:20.669 13:03:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:20.669 13:03:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:20.669 13:03:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:20.669 13:03:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:20.669 13:03:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:20.669 13:03:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:20.669 13:03:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:20.669 13:03:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:20.928 13:03:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:20.928 "name": "Existed_Raid", 00:19:20.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.928 "strip_size_kb": 0, 00:19:20.928 "state": "configuring", 00:19:20.928 "raid_level": "raid1", 00:19:20.928 "superblock": false, 00:19:20.928 "num_base_bdevs": 3, 00:19:20.928 "num_base_bdevs_discovered": 0, 00:19:20.928 "num_base_bdevs_operational": 3, 00:19:20.928 "base_bdevs_list": [ 00:19:20.928 { 00:19:20.928 "name": "BaseBdev1", 00:19:20.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.928 "is_configured": false, 00:19:20.928 "data_offset": 0, 00:19:20.928 "data_size": 0 00:19:20.928 }, 00:19:20.928 { 00:19:20.928 "name": "BaseBdev2", 00:19:20.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.928 "is_configured": false, 00:19:20.928 "data_offset": 0, 00:19:20.928 "data_size": 0 00:19:20.928 }, 00:19:20.928 { 00:19:20.928 "name": "BaseBdev3", 00:19:20.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.928 "is_configured": false, 00:19:20.928 "data_offset": 0, 00:19:20.928 "data_size": 0 00:19:20.928 } 00:19:20.928 ] 00:19:20.928 }' 00:19:20.928 13:03:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:20.928 13:03:24 -- common/autotest_common.sh@10 -- # set +x 00:19:21.496 13:03:25 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:21.755 [2024-04-17 13:03:25.802853] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:21.755 [2024-04-17 13:03:25.803118] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:19:21.755 13:03:25 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:22.014 [2024-04-17 13:03:26.066978] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:22.014 [2024-04-17 13:03:26.067226] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:22.014 [2024-04-17 13:03:26.067345] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:22.014 [2024-04-17 13:03:26.067506] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:22.014 [2024-04-17 13:03:26.067611] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
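Note the ordering in this test: the raid is created before any of its base bdevs exist, so every open probe above fails and the bdev parks in the configuring state until members appear. A sketch of that deferred assembly, using the raid1 invocation from the trace (raid1 takes no -z strip size, and this variant runs without a superblock):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# members do not exist yet; the raid is registered in state "configuring"
"$rpc" -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
# creating a member triggers examine: it is claimed, num_base_bdevs_discovered
# goes from 0 to 1, and state stays "configuring"
"$rpc" -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
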
00:19:22.014 [2024-04-17 13:03:26.067699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:22.014 13:03:26 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:22.273 [2024-04-17 13:03:26.359606] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:22.273 BaseBdev1 00:19:22.273 13:03:26 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:22.273 13:03:26 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:19:22.273 13:03:26 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:19:22.273 13:03:26 -- common/autotest_common.sh@887 -- # local i 00:19:22.273 13:03:26 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:19:22.273 13:03:26 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:19:22.273 13:03:26 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:22.532 13:03:26 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:22.791 [ 00:19:22.791 { 00:19:22.791 "name": "BaseBdev1", 00:19:22.791 "aliases": [ 00:19:22.791 "76bb62aa-d0e6-4220-a2e4-b8faea41ca9b" 00:19:22.791 ], 00:19:22.791 "product_name": "Malloc disk", 00:19:22.791 "block_size": 512, 00:19:22.791 "num_blocks": 65536, 00:19:22.791 "uuid": "76bb62aa-d0e6-4220-a2e4-b8faea41ca9b", 00:19:22.791 "assigned_rate_limits": { 00:19:22.791 "rw_ios_per_sec": 0, 00:19:22.791 "rw_mbytes_per_sec": 0, 00:19:22.791 "r_mbytes_per_sec": 0, 00:19:22.791 "w_mbytes_per_sec": 0 00:19:22.791 }, 00:19:22.791 "claimed": true, 00:19:22.791 "claim_type": "exclusive_write", 00:19:22.791 "zoned": false, 00:19:22.791 "supported_io_types": { 00:19:22.791 "read": true, 00:19:22.791 "write": true, 00:19:22.791 "unmap": true, 00:19:22.791 "write_zeroes": true, 00:19:22.791 "flush": true, 00:19:22.791 "reset": true, 00:19:22.791 "compare": false, 00:19:22.791 "compare_and_write": false, 00:19:22.791 "abort": true, 00:19:22.791 "nvme_admin": false, 00:19:22.791 "nvme_io": false 00:19:22.791 }, 00:19:22.791 "memory_domains": [ 00:19:22.791 { 00:19:22.791 "dma_device_id": "system", 00:19:22.791 "dma_device_type": 1 00:19:22.791 }, 00:19:22.791 { 00:19:22.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:22.791 "dma_device_type": 2 00:19:22.791 } 00:19:22.791 ], 00:19:22.791 "driver_specific": {} 00:19:22.791 } 00:19:22.791 ] 00:19:22.791 13:03:26 -- common/autotest_common.sh@893 -- # return 0 00:19:22.791 13:03:26 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:22.791 13:03:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:22.791 13:03:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:22.791 13:03:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:22.791 13:03:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:22.791 13:03:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:22.791 13:03:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:22.791 13:03:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:22.791 13:03:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:22.791 13:03:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:22.791 13:03:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.791 13:03:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:23.050 13:03:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:23.050 "name": "Existed_Raid", 00:19:23.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.050 "strip_size_kb": 0, 00:19:23.050 "state": "configuring", 00:19:23.050 "raid_level": "raid1", 00:19:23.050 "superblock": false, 00:19:23.050 "num_base_bdevs": 3, 00:19:23.050 "num_base_bdevs_discovered": 1, 00:19:23.050 "num_base_bdevs_operational": 3, 00:19:23.050 "base_bdevs_list": [ 00:19:23.050 { 00:19:23.050 "name": "BaseBdev1", 00:19:23.050 "uuid": "76bb62aa-d0e6-4220-a2e4-b8faea41ca9b", 00:19:23.050 "is_configured": true, 00:19:23.050 "data_offset": 0, 00:19:23.050 "data_size": 65536 00:19:23.050 }, 00:19:23.050 { 00:19:23.050 "name": "BaseBdev2", 00:19:23.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.050 "is_configured": false, 00:19:23.050 "data_offset": 0, 00:19:23.050 "data_size": 0 00:19:23.050 }, 00:19:23.050 { 00:19:23.050 "name": "BaseBdev3", 00:19:23.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.050 "is_configured": false, 00:19:23.050 "data_offset": 0, 00:19:23.050 "data_size": 0 00:19:23.050 } 00:19:23.050 ] 00:19:23.050 }' 00:19:23.050 13:03:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:23.050 13:03:27 -- common/autotest_common.sh@10 -- # set +x 00:19:24.011 13:03:27 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:24.011 [2024-04-17 13:03:28.012240] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:24.011 [2024-04-17 13:03:28.012505] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:19:24.011 13:03:28 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:19:24.011 13:03:28 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:24.269 [2024-04-17 13:03:28.280309] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:24.269 [2024-04-17 13:03:28.282459] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:24.269 [2024-04-17 13:03:28.282630] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:24.269 [2024-04-17 13:03:28.282738] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:24.269 [2024-04-17 13:03:28.282901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:24.270 13:03:28 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:19:24.270 13:03:28 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:24.270 13:03:28 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:24.270 13:03:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:24.270 13:03:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:24.270 13:03:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:24.270 13:03:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:24.270 13:03:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:24.270 13:03:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:24.270 13:03:28 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:24.270 13:03:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:24.270 13:03:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:24.270 13:03:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:24.270 13:03:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:24.528 13:03:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:24.528 "name": "Existed_Raid", 00:19:24.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.528 "strip_size_kb": 0, 00:19:24.528 "state": "configuring", 00:19:24.528 "raid_level": "raid1", 00:19:24.528 "superblock": false, 00:19:24.528 "num_base_bdevs": 3, 00:19:24.528 "num_base_bdevs_discovered": 1, 00:19:24.528 "num_base_bdevs_operational": 3, 00:19:24.528 "base_bdevs_list": [ 00:19:24.528 { 00:19:24.528 "name": "BaseBdev1", 00:19:24.528 "uuid": "76bb62aa-d0e6-4220-a2e4-b8faea41ca9b", 00:19:24.528 "is_configured": true, 00:19:24.528 "data_offset": 0, 00:19:24.528 "data_size": 65536 00:19:24.528 }, 00:19:24.528 { 00:19:24.528 "name": "BaseBdev2", 00:19:24.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.528 "is_configured": false, 00:19:24.528 "data_offset": 0, 00:19:24.528 "data_size": 0 00:19:24.528 }, 00:19:24.528 { 00:19:24.528 "name": "BaseBdev3", 00:19:24.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.528 "is_configured": false, 00:19:24.528 "data_offset": 0, 00:19:24.529 "data_size": 0 00:19:24.529 } 00:19:24.529 ] 00:19:24.529 }' 00:19:24.529 13:03:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:24.529 13:03:28 -- common/autotest_common.sh@10 -- # set +x 00:19:25.095 13:03:29 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:25.661 [2024-04-17 13:03:29.525157] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:25.661 BaseBdev2 00:19:25.661 13:03:29 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:25.661 13:03:29 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:19:25.661 13:03:29 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:19:25.661 13:03:29 -- common/autotest_common.sh@887 -- # local i 00:19:25.661 13:03:29 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:19:25.661 13:03:29 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:19:25.662 13:03:29 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:25.662 13:03:29 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:25.920 [ 00:19:25.920 { 00:19:25.920 "name": "BaseBdev2", 00:19:25.920 "aliases": [ 00:19:25.920 "0f5a619d-99ce-454c-9b34-dd3eb810a4d3" 00:19:25.920 ], 00:19:25.920 "product_name": "Malloc disk", 00:19:25.920 "block_size": 512, 00:19:25.920 "num_blocks": 65536, 00:19:25.920 "uuid": "0f5a619d-99ce-454c-9b34-dd3eb810a4d3", 00:19:25.920 "assigned_rate_limits": { 00:19:25.920 "rw_ios_per_sec": 0, 00:19:25.920 "rw_mbytes_per_sec": 0, 00:19:25.920 "r_mbytes_per_sec": 0, 00:19:25.920 "w_mbytes_per_sec": 0 00:19:25.920 }, 00:19:25.920 "claimed": true, 00:19:25.920 "claim_type": "exclusive_write", 00:19:25.920 "zoned": false, 00:19:25.920 "supported_io_types": { 00:19:25.920 "read": true, 00:19:25.920 "write": true, 
00:19:25.920 "unmap": true, 00:19:25.920 "write_zeroes": true, 00:19:25.920 "flush": true, 00:19:25.920 "reset": true, 00:19:25.920 "compare": false, 00:19:25.920 "compare_and_write": false, 00:19:25.920 "abort": true, 00:19:25.920 "nvme_admin": false, 00:19:25.920 "nvme_io": false 00:19:25.920 }, 00:19:25.920 "memory_domains": [ 00:19:25.920 { 00:19:25.920 "dma_device_id": "system", 00:19:25.920 "dma_device_type": 1 00:19:25.920 }, 00:19:25.920 { 00:19:25.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:25.920 "dma_device_type": 2 00:19:25.920 } 00:19:25.920 ], 00:19:25.920 "driver_specific": {} 00:19:25.920 } 00:19:25.920 ] 00:19:25.920 13:03:29 -- common/autotest_common.sh@893 -- # return 0 00:19:25.920 13:03:29 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:25.920 13:03:29 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:25.920 13:03:29 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:25.920 13:03:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:25.920 13:03:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:25.920 13:03:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:25.920 13:03:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:25.920 13:03:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:25.920 13:03:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:25.920 13:03:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:25.920 13:03:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:25.920 13:03:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:25.920 13:03:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:25.920 13:03:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:26.180 13:03:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:26.180 "name": "Existed_Raid", 00:19:26.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.180 "strip_size_kb": 0, 00:19:26.180 "state": "configuring", 00:19:26.180 "raid_level": "raid1", 00:19:26.180 "superblock": false, 00:19:26.180 "num_base_bdevs": 3, 00:19:26.180 "num_base_bdevs_discovered": 2, 00:19:26.180 "num_base_bdevs_operational": 3, 00:19:26.180 "base_bdevs_list": [ 00:19:26.180 { 00:19:26.180 "name": "BaseBdev1", 00:19:26.180 "uuid": "76bb62aa-d0e6-4220-a2e4-b8faea41ca9b", 00:19:26.180 "is_configured": true, 00:19:26.180 "data_offset": 0, 00:19:26.180 "data_size": 65536 00:19:26.180 }, 00:19:26.180 { 00:19:26.180 "name": "BaseBdev2", 00:19:26.180 "uuid": "0f5a619d-99ce-454c-9b34-dd3eb810a4d3", 00:19:26.180 "is_configured": true, 00:19:26.180 "data_offset": 0, 00:19:26.180 "data_size": 65536 00:19:26.180 }, 00:19:26.180 { 00:19:26.180 "name": "BaseBdev3", 00:19:26.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.180 "is_configured": false, 00:19:26.180 "data_offset": 0, 00:19:26.180 "data_size": 0 00:19:26.180 } 00:19:26.180 ] 00:19:26.180 }' 00:19:26.180 13:03:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:26.180 13:03:30 -- common/autotest_common.sh@10 -- # set +x 00:19:27.145 13:03:30 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:27.424 [2024-04-17 13:03:31.297408] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:27.424 [2024-04-17 13:03:31.298830] 
bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:19:27.424 [2024-04-17 13:03:31.298877] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:27.424 [2024-04-17 13:03:31.299173] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:19:27.424 [2024-04-17 13:03:31.299707] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:19:27.424 [2024-04-17 13:03:31.299853] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:19:27.424 [2024-04-17 13:03:31.300242] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:27.424 BaseBdev3 00:19:27.424 13:03:31 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:27.424 13:03:31 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:19:27.424 13:03:31 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:19:27.424 13:03:31 -- common/autotest_common.sh@887 -- # local i 00:19:27.424 13:03:31 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:19:27.424 13:03:31 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:19:27.424 13:03:31 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:27.683 13:03:31 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:27.941 [ 00:19:27.941 { 00:19:27.941 "name": "BaseBdev3", 00:19:27.941 "aliases": [ 00:19:27.941 "f2c08d0d-429a-44d5-a8c5-14926e0e46c0" 00:19:27.941 ], 00:19:27.941 "product_name": "Malloc disk", 00:19:27.941 "block_size": 512, 00:19:27.941 "num_blocks": 65536, 00:19:27.941 "uuid": "f2c08d0d-429a-44d5-a8c5-14926e0e46c0", 00:19:27.941 "assigned_rate_limits": { 00:19:27.941 "rw_ios_per_sec": 0, 00:19:27.941 "rw_mbytes_per_sec": 0, 00:19:27.941 "r_mbytes_per_sec": 0, 00:19:27.941 "w_mbytes_per_sec": 0 00:19:27.941 }, 00:19:27.941 "claimed": true, 00:19:27.941 "claim_type": "exclusive_write", 00:19:27.941 "zoned": false, 00:19:27.941 "supported_io_types": { 00:19:27.941 "read": true, 00:19:27.941 "write": true, 00:19:27.941 "unmap": true, 00:19:27.941 "write_zeroes": true, 00:19:27.941 "flush": true, 00:19:27.941 "reset": true, 00:19:27.941 "compare": false, 00:19:27.941 "compare_and_write": false, 00:19:27.941 "abort": true, 00:19:27.941 "nvme_admin": false, 00:19:27.941 "nvme_io": false 00:19:27.941 }, 00:19:27.941 "memory_domains": [ 00:19:27.941 { 00:19:27.941 "dma_device_id": "system", 00:19:27.941 "dma_device_type": 1 00:19:27.941 }, 00:19:27.941 { 00:19:27.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:27.941 "dma_device_type": 2 00:19:27.941 } 00:19:27.941 ], 00:19:27.942 "driver_specific": {} 00:19:27.942 } 00:19:27.942 ] 00:19:27.942 13:03:31 -- common/autotest_common.sh@893 -- # return 0 00:19:27.942 13:03:31 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:27.942 13:03:31 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:27.942 13:03:31 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:19:27.942 13:03:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:27.942 13:03:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:27.942 13:03:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:27.942 13:03:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:27.942 13:03:31 -- 
bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:27.942 13:03:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:27.942 13:03:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:27.942 13:03:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:27.942 13:03:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:27.942 13:03:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:27.942 13:03:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:28.200 13:03:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:28.200 "name": "Existed_Raid", 00:19:28.200 "uuid": "c696035c-cd77-4e04-9705-778b8b4dfff6", 00:19:28.200 "strip_size_kb": 0, 00:19:28.200 "state": "online", 00:19:28.200 "raid_level": "raid1", 00:19:28.200 "superblock": false, 00:19:28.200 "num_base_bdevs": 3, 00:19:28.200 "num_base_bdevs_discovered": 3, 00:19:28.201 "num_base_bdevs_operational": 3, 00:19:28.201 "base_bdevs_list": [ 00:19:28.201 { 00:19:28.201 "name": "BaseBdev1", 00:19:28.201 "uuid": "76bb62aa-d0e6-4220-a2e4-b8faea41ca9b", 00:19:28.201 "is_configured": true, 00:19:28.201 "data_offset": 0, 00:19:28.201 "data_size": 65536 00:19:28.201 }, 00:19:28.201 { 00:19:28.201 "name": "BaseBdev2", 00:19:28.201 "uuid": "0f5a619d-99ce-454c-9b34-dd3eb810a4d3", 00:19:28.201 "is_configured": true, 00:19:28.201 "data_offset": 0, 00:19:28.201 "data_size": 65536 00:19:28.201 }, 00:19:28.201 { 00:19:28.201 "name": "BaseBdev3", 00:19:28.201 "uuid": "f2c08d0d-429a-44d5-a8c5-14926e0e46c0", 00:19:28.201 "is_configured": true, 00:19:28.201 "data_offset": 0, 00:19:28.201 "data_size": 65536 00:19:28.201 } 00:19:28.201 ] 00:19:28.201 }' 00:19:28.201 13:03:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:28.201 13:03:32 -- common/autotest_common.sh@10 -- # set +x 00:19:28.767 13:03:32 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:29.026 [2024-04-17 13:03:33.058216] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:29.026 13:03:33 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:29.026 13:03:33 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:19:29.026 13:03:33 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:29.026 13:03:33 -- bdev/bdev_raid.sh@196 -- # return 0 00:19:29.026 13:03:33 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:19:29.026 13:03:33 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:29.026 13:03:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:29.026 13:03:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:29.026 13:03:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:29.026 13:03:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:29.026 13:03:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:29.026 13:03:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:29.026 13:03:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:29.026 13:03:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:29.026 13:03:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:29.026 13:03:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:29.026 13:03:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
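has_redundancy returned 0 for raid1 above, so the harness expects the raid to survive the bdev_malloc_delete of BaseBdev1; the raid_bdev_info dump that follows confirms it stays online with one slot nulled out. The trace does not show how verify_raid_bdev_state compares the captured fields, so the checks below are an assumed reconstruction of its "online raid1 0 2" expectations:

raid_bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
# assumed field-by-field comparison against the expected state
[[ $(jq -r '.state' <<<"$raid_bdev_info") == online ]]
[[ $(jq -r '.raid_level' <<<"$raid_bdev_info") == raid1 ]]
[[ $(jq -r '.num_base_bdevs_operational' <<<"$raid_bdev_info") == 2 ]]
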
00:19:29.594 13:03:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:29.594 "name": "Existed_Raid", 00:19:29.594 "uuid": "c696035c-cd77-4e04-9705-778b8b4dfff6", 00:19:29.594 "strip_size_kb": 0, 00:19:29.594 "state": "online", 00:19:29.594 "raid_level": "raid1", 00:19:29.594 "superblock": false, 00:19:29.594 "num_base_bdevs": 3, 00:19:29.594 "num_base_bdevs_discovered": 2, 00:19:29.594 "num_base_bdevs_operational": 2, 00:19:29.594 "base_bdevs_list": [ 00:19:29.594 { 00:19:29.594 "name": null, 00:19:29.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.594 "is_configured": false, 00:19:29.594 "data_offset": 0, 00:19:29.594 "data_size": 65536 00:19:29.594 }, 00:19:29.594 { 00:19:29.594 "name": "BaseBdev2", 00:19:29.594 "uuid": "0f5a619d-99ce-454c-9b34-dd3eb810a4d3", 00:19:29.594 "is_configured": true, 00:19:29.594 "data_offset": 0, 00:19:29.594 "data_size": 65536 00:19:29.594 }, 00:19:29.594 { 00:19:29.594 "name": "BaseBdev3", 00:19:29.594 "uuid": "f2c08d0d-429a-44d5-a8c5-14926e0e46c0", 00:19:29.594 "is_configured": true, 00:19:29.594 "data_offset": 0, 00:19:29.594 "data_size": 65536 00:19:29.594 } 00:19:29.594 ] 00:19:29.594 }' 00:19:29.594 13:03:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:29.594 13:03:33 -- common/autotest_common.sh@10 -- # set +x 00:19:30.161 13:03:34 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:30.161 13:03:34 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:30.161 13:03:34 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:30.161 13:03:34 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:30.419 13:03:34 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:30.419 13:03:34 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:30.419 13:03:34 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:30.678 [2024-04-17 13:03:34.682491] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:30.678 13:03:34 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:30.678 13:03:34 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:30.678 13:03:34 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:30.678 13:03:34 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:30.937 13:03:35 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:30.937 13:03:35 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:30.937 13:03:35 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:31.233 [2024-04-17 13:03:35.324118] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:31.233 [2024-04-17 13:03:35.324417] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:31.492 [2024-04-17 13:03:35.405275] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:31.492 [2024-04-17 13:03:35.405610] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:31.492 [2024-04-17 13:03:35.407959] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:19:31.492 13:03:35 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:31.492 13:03:35 -- bdev/bdev_raid.sh@273 -- # (( i < 
num_base_bdevs )) 00:19:31.492 13:03:35 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:31.492 13:03:35 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:19:31.752 13:03:35 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:19:31.752 13:03:35 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:19:31.752 13:03:35 -- bdev/bdev_raid.sh@287 -- # killprocess 124613 00:19:31.752 13:03:35 -- common/autotest_common.sh@924 -- # '[' -z 124613 ']' 00:19:31.752 13:03:35 -- common/autotest_common.sh@928 -- # kill -0 124613 00:19:31.752 13:03:35 -- common/autotest_common.sh@929 -- # uname 00:19:31.752 13:03:35 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:19:31.752 13:03:35 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 124613 00:19:31.752 killing process with pid 124613 00:19:31.752 13:03:35 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:19:31.752 13:03:35 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:19:31.752 13:03:35 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 124613' 00:19:31.752 13:03:35 -- common/autotest_common.sh@943 -- # kill 124613 00:19:31.752 13:03:35 -- common/autotest_common.sh@948 -- # wait 124613 00:19:31.752 [2024-04-17 13:03:35.702506] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:31.752 [2024-04-17 13:03:35.702629] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:32.690 ************************************ 00:19:32.690 END TEST raid_state_function_test 00:19:32.690 ************************************ 00:19:32.690 13:03:36 -- bdev/bdev_raid.sh@289 -- # return 0 00:19:32.690 00:19:32.690 real 0m13.413s 00:19:32.690 user 0m23.934s 00:19:32.690 sys 0m1.429s 00:19:32.690 13:03:36 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:19:32.690 13:03:36 -- common/autotest_common.sh@10 -- # set +x 00:19:32.950 13:03:36 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:19:32.950 13:03:36 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:19:32.950 13:03:36 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:19:32.950 13:03:36 -- common/autotest_common.sh@10 -- # set +x 00:19:32.950 ************************************ 00:19:32.950 START TEST raid_state_function_test_sb 00:19:32.950 ************************************ 00:19:32.950 13:03:36 -- common/autotest_common.sh@1099 -- # raid_state_function_test raid1 3 true 00:19:32.950 13:03:36 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:19:32.950 13:03:36 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:19:32.951 13:03:36 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:19:32.951 13:03:36 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:19:32.951 13:03:36 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:19:32.951 13:03:36 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:19:32.951 13:03:36 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:32.951 13:03:36 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:19:32.951 13:03:36 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:32.951 13:03:36 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:32.951 13:03:36 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:19:32.951 13:03:36 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:32.951 13:03:36 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:32.951 13:03:36 
-- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:19:32.951 13:03:36 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:32.951 13:03:36 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:32.951 Process raid pid: 125040 00:19:32.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:32.951 13:03:36 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:19:32.951 13:03:36 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:19:32.951 13:03:36 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:19:32.951 13:03:36 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:19:32.951 13:03:36 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:19:32.951 13:03:36 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:19:32.951 13:03:36 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:19:32.951 13:03:36 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:19:32.951 13:03:36 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:19:32.951 13:03:36 -- bdev/bdev_raid.sh@226 -- # raid_pid=125040 00:19:32.951 13:03:36 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 125040' 00:19:32.951 13:03:36 -- bdev/bdev_raid.sh@228 -- # waitforlisten 125040 /var/tmp/spdk-raid.sock 00:19:32.951 13:03:36 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:32.951 13:03:36 -- common/autotest_common.sh@817 -- # '[' -z 125040 ']' 00:19:32.951 13:03:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:32.951 13:03:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:32.951 13:03:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:32.951 13:03:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:32.951 13:03:36 -- common/autotest_common.sh@10 -- # set +x 00:19:32.951 [2024-04-17 13:03:36.973363] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
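The trace above is the suite's standard bring-up: launch a bare bdev_svc application on a private RPC socket with bdev_raid debug logging enabled, then block until it accepts RPCs before driving it with rpc.py. A minimal sketch of that pattern with the paths from this run; the polling loop is an illustrative stand-in for the suite's waitforlisten helper, not its actual implementation:

    # Start the stub SPDK app used by these tests (flags as in the trace:
    # -r RPC socket, -i shm id, -L extra debug log flag).
    sock=/var/tmp/spdk-raid.sock
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$sock" -i 0 -L bdev_raid &
    raid_pid=$!
    # Poll until the app answers RPCs on the socket before sending commands
    # (illustrative stand-in for waitforlisten).
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods \
            >/dev/null 2>&1; do
        sleep 0.2
    done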
00:19:32.951 [2024-04-17 13:03:36.975615] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:33.210 [2024-04-17 13:03:37.141986] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.469 [2024-04-17 13:03:37.402646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.469 [2024-04-17 13:03:37.609659] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:34.036 13:03:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:34.036 13:03:37 -- common/autotest_common.sh@850 -- # return 0 00:19:34.036 13:03:37 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:34.295 [2024-04-17 13:03:38.235705] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:34.295 [2024-04-17 13:03:38.236026] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:34.295 [2024-04-17 13:03:38.236138] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:34.295 [2024-04-17 13:03:38.236269] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:34.295 [2024-04-17 13:03:38.236386] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:34.295 [2024-04-17 13:03:38.236479] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:34.295 13:03:38 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:34.296 13:03:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:34.296 13:03:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:34.296 13:03:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:34.296 13:03:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:34.296 13:03:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:34.296 13:03:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:34.296 13:03:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:34.296 13:03:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:34.296 13:03:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:34.296 13:03:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:34.296 13:03:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:34.571 13:03:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:34.571 "name": "Existed_Raid", 00:19:34.571 "uuid": "6693d758-5382-47c6-9fe4-fd3fa0e2f1bb", 00:19:34.571 "strip_size_kb": 0, 00:19:34.571 "state": "configuring", 00:19:34.571 "raid_level": "raid1", 00:19:34.571 "superblock": true, 00:19:34.571 "num_base_bdevs": 3, 00:19:34.571 "num_base_bdevs_discovered": 0, 00:19:34.571 "num_base_bdevs_operational": 3, 00:19:34.571 "base_bdevs_list": [ 00:19:34.571 { 00:19:34.571 "name": "BaseBdev1", 00:19:34.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.571 "is_configured": false, 00:19:34.571 "data_offset": 0, 00:19:34.571 "data_size": 0 00:19:34.571 }, 00:19:34.571 { 00:19:34.571 "name": "BaseBdev2", 00:19:34.571 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:34.571 "is_configured": false, 00:19:34.571 "data_offset": 0, 00:19:34.571 "data_size": 0 00:19:34.571 }, 00:19:34.571 { 00:19:34.571 "name": "BaseBdev3", 00:19:34.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.571 "is_configured": false, 00:19:34.571 "data_offset": 0, 00:19:34.571 "data_size": 0 00:19:34.571 } 00:19:34.571 ] 00:19:34.571 }' 00:19:34.571 13:03:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:34.571 13:03:38 -- common/autotest_common.sh@10 -- # set +x 00:19:35.138 13:03:39 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:35.396 [2024-04-17 13:03:39.527917] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:35.396 [2024-04-17 13:03:39.528162] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:19:35.654 13:03:39 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:35.654 [2024-04-17 13:03:39.784045] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:35.654 [2024-04-17 13:03:39.784271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:35.654 [2024-04-17 13:03:39.784378] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:35.654 [2024-04-17 13:03:39.784444] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:35.654 [2024-04-17 13:03:39.784541] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:35.654 [2024-04-17 13:03:39.784607] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:35.654 13:03:39 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:35.913 [2024-04-17 13:03:40.043264] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:35.913 BaseBdev1 00:19:35.913 13:03:40 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:35.913 13:03:40 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:19:35.913 13:03:40 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:19:35.913 13:03:40 -- common/autotest_common.sh@887 -- # local i 00:19:35.913 13:03:40 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:19:35.913 13:03:40 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:19:35.913 13:03:40 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:36.172 13:03:40 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:36.430 [ 00:19:36.430 { 00:19:36.430 "name": "BaseBdev1", 00:19:36.430 "aliases": [ 00:19:36.430 "f1425f28-6bbe-4a78-b2b8-17d86157453e" 00:19:36.430 ], 00:19:36.430 "product_name": "Malloc disk", 00:19:36.430 "block_size": 512, 00:19:36.430 "num_blocks": 65536, 00:19:36.430 "uuid": "f1425f28-6bbe-4a78-b2b8-17d86157453e", 00:19:36.430 "assigned_rate_limits": { 00:19:36.430 "rw_ios_per_sec": 0, 00:19:36.430 "rw_mbytes_per_sec": 0, 00:19:36.430 "r_mbytes_per_sec": 0, 00:19:36.430 "w_mbytes_per_sec": 0 
00:19:36.430 }, 00:19:36.430 "claimed": true, 00:19:36.430 "claim_type": "exclusive_write", 00:19:36.430 "zoned": false, 00:19:36.430 "supported_io_types": { 00:19:36.430 "read": true, 00:19:36.430 "write": true, 00:19:36.430 "unmap": true, 00:19:36.430 "write_zeroes": true, 00:19:36.430 "flush": true, 00:19:36.430 "reset": true, 00:19:36.430 "compare": false, 00:19:36.430 "compare_and_write": false, 00:19:36.430 "abort": true, 00:19:36.430 "nvme_admin": false, 00:19:36.430 "nvme_io": false 00:19:36.430 }, 00:19:36.430 "memory_domains": [ 00:19:36.430 { 00:19:36.430 "dma_device_id": "system", 00:19:36.430 "dma_device_type": 1 00:19:36.430 }, 00:19:36.430 { 00:19:36.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.430 "dma_device_type": 2 00:19:36.430 } 00:19:36.430 ], 00:19:36.430 "driver_specific": {} 00:19:36.430 } 00:19:36.430 ] 00:19:36.430 13:03:40 -- common/autotest_common.sh@893 -- # return 0 00:19:36.430 13:03:40 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:36.430 13:03:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:36.430 13:03:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:36.430 13:03:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:36.430 13:03:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:36.430 13:03:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:36.430 13:03:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:36.430 13:03:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:36.430 13:03:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:36.430 13:03:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:36.430 13:03:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:36.430 13:03:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:36.688 13:03:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:36.688 "name": "Existed_Raid", 00:19:36.688 "uuid": "6ac9b273-fa56-4780-b277-79cbed09ea6c", 00:19:36.688 "strip_size_kb": 0, 00:19:36.688 "state": "configuring", 00:19:36.688 "raid_level": "raid1", 00:19:36.688 "superblock": true, 00:19:36.688 "num_base_bdevs": 3, 00:19:36.688 "num_base_bdevs_discovered": 1, 00:19:36.688 "num_base_bdevs_operational": 3, 00:19:36.688 "base_bdevs_list": [ 00:19:36.688 { 00:19:36.688 "name": "BaseBdev1", 00:19:36.688 "uuid": "f1425f28-6bbe-4a78-b2b8-17d86157453e", 00:19:36.688 "is_configured": true, 00:19:36.688 "data_offset": 2048, 00:19:36.688 "data_size": 63488 00:19:36.688 }, 00:19:36.688 { 00:19:36.688 "name": "BaseBdev2", 00:19:36.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.688 "is_configured": false, 00:19:36.688 "data_offset": 0, 00:19:36.688 "data_size": 0 00:19:36.688 }, 00:19:36.688 { 00:19:36.688 "name": "BaseBdev3", 00:19:36.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.688 "is_configured": false, 00:19:36.688 "data_offset": 0, 00:19:36.688 "data_size": 0 00:19:36.688 } 00:19:36.688 ] 00:19:36.688 }' 00:19:36.688 13:03:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:36.688 13:03:40 -- common/autotest_common.sh@10 -- # set +x 00:19:37.622 13:03:41 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:37.622 [2024-04-17 13:03:41.720303] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:37.622 
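Each verify_raid_bdev_state call in this trace reduces to one RPC plus a jq filter: dump every raid bdev, select the array under test, and compare fields such as state and num_base_bdevs_discovered against the expected values. A sketch of that query under the same names from this run; the rpc wrapper function and the jq -e assertion are illustrative, not the suite's exact helper code:

    # Small wrapper around the test socket (illustrative helper).
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    # Fetch the raid's view of itself and assert on the interesting fields;
    # jq -e exits non-zero when the expression evaluates to false.
    rpc bdev_raid_get_bdevs all | jq -e '
        .[] | select(.name == "Existed_Raid")
            | .state == "configuring" and .num_base_bdevs_discovered == 1' >/dev/null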
[2024-04-17 13:03:41.720533] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:19:37.622 13:03:41 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:19:37.622 13:03:41 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:38.189 13:03:42 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:38.189 BaseBdev1 00:19:38.446 13:03:42 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:19:38.446 13:03:42 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:19:38.446 13:03:42 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:19:38.446 13:03:42 -- common/autotest_common.sh@887 -- # local i 00:19:38.446 13:03:42 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:19:38.446 13:03:42 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:19:38.446 13:03:42 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:38.446 13:03:42 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:38.704 [ 00:19:38.704 { 00:19:38.704 "name": "BaseBdev1", 00:19:38.704 "aliases": [ 00:19:38.704 "d7ad03ca-4c75-4f19-9a51-14f83ba08fdb" 00:19:38.704 ], 00:19:38.704 "product_name": "Malloc disk", 00:19:38.704 "block_size": 512, 00:19:38.704 "num_blocks": 65536, 00:19:38.704 "uuid": "d7ad03ca-4c75-4f19-9a51-14f83ba08fdb", 00:19:38.704 "assigned_rate_limits": { 00:19:38.704 "rw_ios_per_sec": 0, 00:19:38.704 "rw_mbytes_per_sec": 0, 00:19:38.704 "r_mbytes_per_sec": 0, 00:19:38.704 "w_mbytes_per_sec": 0 00:19:38.704 }, 00:19:38.704 "claimed": false, 00:19:38.704 "zoned": false, 00:19:38.704 "supported_io_types": { 00:19:38.704 "read": true, 00:19:38.704 "write": true, 00:19:38.704 "unmap": true, 00:19:38.704 "write_zeroes": true, 00:19:38.704 "flush": true, 00:19:38.704 "reset": true, 00:19:38.704 "compare": false, 00:19:38.704 "compare_and_write": false, 00:19:38.704 "abort": true, 00:19:38.704 "nvme_admin": false, 00:19:38.704 "nvme_io": false 00:19:38.704 }, 00:19:38.704 "memory_domains": [ 00:19:38.704 { 00:19:38.704 "dma_device_id": "system", 00:19:38.704 "dma_device_type": 1 00:19:38.704 }, 00:19:38.704 { 00:19:38.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:38.704 "dma_device_type": 2 00:19:38.704 } 00:19:38.704 ], 00:19:38.704 "driver_specific": {} 00:19:38.704 } 00:19:38.704 ] 00:19:38.704 13:03:42 -- common/autotest_common.sh@893 -- # return 0 00:19:38.704 13:03:42 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:38.961 [2024-04-17 13:03:43.050117] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:38.961 [2024-04-17 13:03:43.053816] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:38.961 [2024-04-17 13:03:43.054044] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:38.961 [2024-04-17 13:03:43.054149] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:38.961 [2024-04-17 13:03:43.054302] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 
doesn't exist now 00:19:38.961 13:03:43 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:19:38.961 13:03:43 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:38.961 13:03:43 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:38.961 13:03:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:38.961 13:03:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:38.961 13:03:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:38.961 13:03:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:38.961 13:03:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:38.961 13:03:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:38.961 13:03:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:38.961 13:03:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:38.961 13:03:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:38.961 13:03:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:38.961 13:03:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:39.218 13:03:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:39.218 "name": "Existed_Raid", 00:19:39.218 "uuid": "dd5ecbdd-2daf-46e7-8245-0cbeda916d3d", 00:19:39.218 "strip_size_kb": 0, 00:19:39.218 "state": "configuring", 00:19:39.218 "raid_level": "raid1", 00:19:39.218 "superblock": true, 00:19:39.218 "num_base_bdevs": 3, 00:19:39.218 "num_base_bdevs_discovered": 1, 00:19:39.218 "num_base_bdevs_operational": 3, 00:19:39.218 "base_bdevs_list": [ 00:19:39.218 { 00:19:39.218 "name": "BaseBdev1", 00:19:39.218 "uuid": "d7ad03ca-4c75-4f19-9a51-14f83ba08fdb", 00:19:39.218 "is_configured": true, 00:19:39.218 "data_offset": 2048, 00:19:39.218 "data_size": 63488 00:19:39.218 }, 00:19:39.218 { 00:19:39.218 "name": "BaseBdev2", 00:19:39.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.218 "is_configured": false, 00:19:39.218 "data_offset": 0, 00:19:39.218 "data_size": 0 00:19:39.218 }, 00:19:39.218 { 00:19:39.218 "name": "BaseBdev3", 00:19:39.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.218 "is_configured": false, 00:19:39.218 "data_offset": 0, 00:19:39.218 "data_size": 0 00:19:39.218 } 00:19:39.218 ] 00:19:39.218 }' 00:19:39.218 13:03:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:39.218 13:03:43 -- common/autotest_common.sh@10 -- # set +x 00:19:40.150 13:03:43 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:40.150 [2024-04-17 13:03:44.251144] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:40.150 BaseBdev2 00:19:40.150 13:03:44 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:40.150 13:03:44 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:19:40.150 13:03:44 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:19:40.150 13:03:44 -- common/autotest_common.sh@887 -- # local i 00:19:40.150 13:03:44 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:19:40.150 13:03:44 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:19:40.150 13:03:44 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:40.407 13:03:44 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:40.665 [ 00:19:40.665 { 00:19:40.665 "name": "BaseBdev2", 00:19:40.665 "aliases": [ 00:19:40.665 "d2652582-0416-40cd-9275-1480ce6e53eb" 00:19:40.665 ], 00:19:40.665 "product_name": "Malloc disk", 00:19:40.665 "block_size": 512, 00:19:40.665 "num_blocks": 65536, 00:19:40.665 "uuid": "d2652582-0416-40cd-9275-1480ce6e53eb", 00:19:40.665 "assigned_rate_limits": { 00:19:40.665 "rw_ios_per_sec": 0, 00:19:40.665 "rw_mbytes_per_sec": 0, 00:19:40.665 "r_mbytes_per_sec": 0, 00:19:40.665 "w_mbytes_per_sec": 0 00:19:40.665 }, 00:19:40.665 "claimed": true, 00:19:40.665 "claim_type": "exclusive_write", 00:19:40.665 "zoned": false, 00:19:40.665 "supported_io_types": { 00:19:40.665 "read": true, 00:19:40.665 "write": true, 00:19:40.665 "unmap": true, 00:19:40.665 "write_zeroes": true, 00:19:40.665 "flush": true, 00:19:40.665 "reset": true, 00:19:40.665 "compare": false, 00:19:40.665 "compare_and_write": false, 00:19:40.665 "abort": true, 00:19:40.665 "nvme_admin": false, 00:19:40.665 "nvme_io": false 00:19:40.665 }, 00:19:40.665 "memory_domains": [ 00:19:40.665 { 00:19:40.665 "dma_device_id": "system", 00:19:40.665 "dma_device_type": 1 00:19:40.665 }, 00:19:40.665 { 00:19:40.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:40.665 "dma_device_type": 2 00:19:40.665 } 00:19:40.665 ], 00:19:40.665 "driver_specific": {} 00:19:40.665 } 00:19:40.665 ] 00:19:40.665 13:03:44 -- common/autotest_common.sh@893 -- # return 0 00:19:40.665 13:03:44 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:40.665 13:03:44 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:40.665 13:03:44 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:40.665 13:03:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:40.665 13:03:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:40.665 13:03:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:40.665 13:03:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:40.665 13:03:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:40.665 13:03:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:40.665 13:03:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:40.665 13:03:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:40.665 13:03:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:40.665 13:03:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:40.665 13:03:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:40.922 13:03:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:40.922 "name": "Existed_Raid", 00:19:40.922 "uuid": "dd5ecbdd-2daf-46e7-8245-0cbeda916d3d", 00:19:40.922 "strip_size_kb": 0, 00:19:40.922 "state": "configuring", 00:19:40.922 "raid_level": "raid1", 00:19:40.922 "superblock": true, 00:19:40.922 "num_base_bdevs": 3, 00:19:40.922 "num_base_bdevs_discovered": 2, 00:19:40.922 "num_base_bdevs_operational": 3, 00:19:40.922 "base_bdevs_list": [ 00:19:40.922 { 00:19:40.922 "name": "BaseBdev1", 00:19:40.922 "uuid": "d7ad03ca-4c75-4f19-9a51-14f83ba08fdb", 00:19:40.922 "is_configured": true, 00:19:40.922 "data_offset": 2048, 00:19:40.922 "data_size": 63488 00:19:40.922 }, 00:19:40.922 { 00:19:40.922 "name": "BaseBdev2", 00:19:40.922 "uuid": "d2652582-0416-40cd-9275-1480ce6e53eb", 00:19:40.922 "is_configured": true, 00:19:40.922 "data_offset": 2048, 
00:19:40.922 "data_size": 63488 00:19:40.922 }, 00:19:40.922 { 00:19:40.922 "name": "BaseBdev3", 00:19:40.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.922 "is_configured": false, 00:19:40.922 "data_offset": 0, 00:19:40.922 "data_size": 0 00:19:40.922 } 00:19:40.922 ] 00:19:40.922 }' 00:19:40.922 13:03:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:40.922 13:03:44 -- common/autotest_common.sh@10 -- # set +x 00:19:41.853 13:03:45 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:41.853 [2024-04-17 13:03:45.882226] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:41.853 [2024-04-17 13:03:45.882693] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:19:41.853 [2024-04-17 13:03:45.882828] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:41.853 [2024-04-17 13:03:45.883030] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:19:41.853 BaseBdev3 00:19:41.853 [2024-04-17 13:03:45.883544] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:19:41.853 [2024-04-17 13:03:45.883672] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:19:41.853 [2024-04-17 13:03:45.883982] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:41.853 13:03:45 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:41.853 13:03:45 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:19:41.853 13:03:45 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:19:41.853 13:03:45 -- common/autotest_common.sh@887 -- # local i 00:19:41.853 13:03:45 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:19:41.853 13:03:45 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:19:41.853 13:03:45 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:42.110 13:03:46 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:42.368 [ 00:19:42.368 { 00:19:42.368 "name": "BaseBdev3", 00:19:42.368 "aliases": [ 00:19:42.368 "0813aaf3-38b8-4e36-a730-065e5fcbf9c1" 00:19:42.368 ], 00:19:42.368 "product_name": "Malloc disk", 00:19:42.368 "block_size": 512, 00:19:42.368 "num_blocks": 65536, 00:19:42.368 "uuid": "0813aaf3-38b8-4e36-a730-065e5fcbf9c1", 00:19:42.368 "assigned_rate_limits": { 00:19:42.368 "rw_ios_per_sec": 0, 00:19:42.368 "rw_mbytes_per_sec": 0, 00:19:42.368 "r_mbytes_per_sec": 0, 00:19:42.368 "w_mbytes_per_sec": 0 00:19:42.368 }, 00:19:42.368 "claimed": true, 00:19:42.368 "claim_type": "exclusive_write", 00:19:42.368 "zoned": false, 00:19:42.368 "supported_io_types": { 00:19:42.368 "read": true, 00:19:42.368 "write": true, 00:19:42.368 "unmap": true, 00:19:42.368 "write_zeroes": true, 00:19:42.368 "flush": true, 00:19:42.368 "reset": true, 00:19:42.368 "compare": false, 00:19:42.368 "compare_and_write": false, 00:19:42.368 "abort": true, 00:19:42.368 "nvme_admin": false, 00:19:42.368 "nvme_io": false 00:19:42.368 }, 00:19:42.368 "memory_domains": [ 00:19:42.368 { 00:19:42.368 "dma_device_id": "system", 00:19:42.368 "dma_device_type": 1 00:19:42.368 }, 00:19:42.368 { 00:19:42.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:42.368 
"dma_device_type": 2 00:19:42.368 } 00:19:42.368 ], 00:19:42.368 "driver_specific": {} 00:19:42.368 } 00:19:42.368 ] 00:19:42.368 13:03:46 -- common/autotest_common.sh@893 -- # return 0 00:19:42.368 13:03:46 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:42.368 13:03:46 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:42.368 13:03:46 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:19:42.368 13:03:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:42.368 13:03:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:42.368 13:03:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:42.368 13:03:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:42.368 13:03:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:42.368 13:03:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:42.368 13:03:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:42.368 13:03:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:42.368 13:03:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:42.368 13:03:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.368 13:03:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:42.626 13:03:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:42.626 "name": "Existed_Raid", 00:19:42.626 "uuid": "dd5ecbdd-2daf-46e7-8245-0cbeda916d3d", 00:19:42.626 "strip_size_kb": 0, 00:19:42.626 "state": "online", 00:19:42.626 "raid_level": "raid1", 00:19:42.626 "superblock": true, 00:19:42.626 "num_base_bdevs": 3, 00:19:42.626 "num_base_bdevs_discovered": 3, 00:19:42.626 "num_base_bdevs_operational": 3, 00:19:42.626 "base_bdevs_list": [ 00:19:42.626 { 00:19:42.626 "name": "BaseBdev1", 00:19:42.626 "uuid": "d7ad03ca-4c75-4f19-9a51-14f83ba08fdb", 00:19:42.626 "is_configured": true, 00:19:42.626 "data_offset": 2048, 00:19:42.626 "data_size": 63488 00:19:42.626 }, 00:19:42.626 { 00:19:42.626 "name": "BaseBdev2", 00:19:42.626 "uuid": "d2652582-0416-40cd-9275-1480ce6e53eb", 00:19:42.626 "is_configured": true, 00:19:42.626 "data_offset": 2048, 00:19:42.626 "data_size": 63488 00:19:42.626 }, 00:19:42.626 { 00:19:42.626 "name": "BaseBdev3", 00:19:42.626 "uuid": "0813aaf3-38b8-4e36-a730-065e5fcbf9c1", 00:19:42.626 "is_configured": true, 00:19:42.626 "data_offset": 2048, 00:19:42.626 "data_size": 63488 00:19:42.626 } 00:19:42.626 ] 00:19:42.626 }' 00:19:42.626 13:03:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:42.626 13:03:46 -- common/autotest_common.sh@10 -- # set +x 00:19:43.602 13:03:47 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:43.602 [2024-04-17 13:03:47.550900] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:43.602 13:03:47 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:43.602 13:03:47 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:19:43.602 13:03:47 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:43.602 13:03:47 -- bdev/bdev_raid.sh@196 -- # return 0 00:19:43.602 13:03:47 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:19:43.602 13:03:47 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:43.602 13:03:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:43.602 13:03:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 
00:19:43.602 13:03:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:43.602 13:03:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:43.602 13:03:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:43.602 13:03:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:43.602 13:03:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:43.602 13:03:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:43.602 13:03:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:43.602 13:03:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.602 13:03:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:43.861 13:03:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:43.861 "name": "Existed_Raid", 00:19:43.861 "uuid": "dd5ecbdd-2daf-46e7-8245-0cbeda916d3d", 00:19:43.861 "strip_size_kb": 0, 00:19:43.862 "state": "online", 00:19:43.862 "raid_level": "raid1", 00:19:43.862 "superblock": true, 00:19:43.862 "num_base_bdevs": 3, 00:19:43.862 "num_base_bdevs_discovered": 2, 00:19:43.862 "num_base_bdevs_operational": 2, 00:19:43.862 "base_bdevs_list": [ 00:19:43.862 { 00:19:43.862 "name": null, 00:19:43.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.862 "is_configured": false, 00:19:43.862 "data_offset": 2048, 00:19:43.862 "data_size": 63488 00:19:43.862 }, 00:19:43.862 { 00:19:43.862 "name": "BaseBdev2", 00:19:43.862 "uuid": "d2652582-0416-40cd-9275-1480ce6e53eb", 00:19:43.862 "is_configured": true, 00:19:43.862 "data_offset": 2048, 00:19:43.862 "data_size": 63488 00:19:43.862 }, 00:19:43.862 { 00:19:43.862 "name": "BaseBdev3", 00:19:43.862 "uuid": "0813aaf3-38b8-4e36-a730-065e5fcbf9c1", 00:19:43.862 "is_configured": true, 00:19:43.862 "data_offset": 2048, 00:19:43.862 "data_size": 63488 00:19:43.862 } 00:19:43.862 ] 00:19:43.862 }' 00:19:43.862 13:03:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:43.862 13:03:47 -- common/autotest_common.sh@10 -- # set +x 00:19:44.427 13:03:48 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:44.427 13:03:48 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:44.427 13:03:48 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:44.427 13:03:48 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:44.685 13:03:48 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:44.685 13:03:48 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:44.685 13:03:48 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:44.944 [2024-04-17 13:03:49.037070] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:45.204 13:03:49 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:45.204 13:03:49 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:45.204 13:03:49 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.204 13:03:49 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:45.461 13:03:49 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:45.461 13:03:49 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:45.461 13:03:49 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:45.461 
[2024-04-17 13:03:49.565038] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:45.461 [2024-04-17 13:03:49.565430] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:45.719 [2024-04-17 13:03:49.647779] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:45.719 [2024-04-17 13:03:49.648141] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:45.719 [2024-04-17 13:03:49.648317] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:19:45.719 13:03:49 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:45.719 13:03:49 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:45.719 13:03:49 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.719 13:03:49 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:19:45.977 13:03:49 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:19:45.977 13:03:49 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:19:45.977 13:03:49 -- bdev/bdev_raid.sh@287 -- # killprocess 125040 00:19:45.977 13:03:49 -- common/autotest_common.sh@924 -- # '[' -z 125040 ']' 00:19:45.977 13:03:49 -- common/autotest_common.sh@928 -- # kill -0 125040 00:19:45.977 13:03:49 -- common/autotest_common.sh@929 -- # uname 00:19:45.977 13:03:49 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:19:45.977 13:03:49 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 125040 00:19:45.977 killing process with pid 125040 00:19:45.977 13:03:49 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:19:45.977 13:03:49 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:19:45.977 13:03:49 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 125040' 00:19:45.977 13:03:49 -- common/autotest_common.sh@943 -- # kill 125040 00:19:45.977 13:03:49 -- common/autotest_common.sh@948 -- # wait 125040 00:19:45.977 [2024-04-17 13:03:49.945083] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:45.977 [2024-04-17 13:03:49.945209] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:47.389 ************************************ 00:19:47.389 END TEST raid_state_function_test_sb 00:19:47.389 ************************************ 00:19:47.389 13:03:51 -- bdev/bdev_raid.sh@289 -- # return 0 00:19:47.389 00:19:47.389 real 0m14.206s 00:19:47.389 user 0m25.118s 00:19:47.389 sys 0m1.602s 00:19:47.389 13:03:51 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:19:47.389 13:03:51 -- common/autotest_common.sh@10 -- # set +x 00:19:47.389 13:03:51 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:19:47.389 13:03:51 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:19:47.389 13:03:51 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:19:47.389 13:03:51 -- common/autotest_common.sh@10 -- # set +x 00:19:47.389 ************************************ 00:19:47.389 START TEST raid_superblock_test 00:19:47.389 ************************************ 00:19:47.389 13:03:51 -- common/autotest_common.sh@1099 -- # raid_superblock_test raid1 3 00:19:47.389 13:03:51 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:19:47.389 13:03:51 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:19:47.389 13:03:51 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:19:47.389 13:03:51 -- 
bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:19:47.389 13:03:51 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:19:47.389 13:03:51 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:19:47.389 13:03:51 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:19:47.389 13:03:51 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:19:47.389 13:03:51 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:19:47.389 13:03:51 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:19:47.389 13:03:51 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:19:47.389 13:03:51 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:19:47.389 13:03:51 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:19:47.389 13:03:51 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:19:47.389 13:03:51 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:19:47.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:47.389 13:03:51 -- bdev/bdev_raid.sh@357 -- # raid_pid=125473 00:19:47.389 13:03:51 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:19:47.389 13:03:51 -- bdev/bdev_raid.sh@358 -- # waitforlisten 125473 /var/tmp/spdk-raid.sock 00:19:47.389 13:03:51 -- common/autotest_common.sh@817 -- # '[' -z 125473 ']' 00:19:47.389 13:03:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:47.389 13:03:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:47.389 13:03:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:47.389 13:03:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:47.389 13:03:51 -- common/autotest_common.sh@10 -- # set +x 00:19:47.389 [2024-04-17 13:03:51.256625] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
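Unlike the state-function test, raid_superblock_test stacks each member as a passthru bdev (pt1 through pt3) on top of a malloc disk and creates the array with -s, so a raid superblock is written to the members; the passthru layer is what later lets the test delete and re-register a member and watch the examine path re-claim it from that superblock. A sketch of the member setup the following trace performs, with the command arguments and fixed passthru UUIDs taken from this run (the rpc wrapper is an illustrative helper):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    # Three malloc-backed passthru members with fixed UUIDs, then assemble
    # them into a raid1 with an on-disk superblock (-s), as in the trace.
    for i in 1 2 3; do
        rpc bdev_malloc_create 32 512 -b "malloc$i"
        rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done
    rpc bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s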
00:19:47.389 [2024-04-17 13:03:51.256994] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125473 ] 00:19:47.389 [2024-04-17 13:03:51.424090] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.648 [2024-04-17 13:03:51.630414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:47.905 [2024-04-17 13:03:51.825849] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:48.161 13:03:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:48.161 13:03:52 -- common/autotest_common.sh@850 -- # return 0 00:19:48.161 13:03:52 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:19:48.161 13:03:52 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:48.161 13:03:52 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:19:48.161 13:03:52 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:19:48.161 13:03:52 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:48.161 13:03:52 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:48.161 13:03:52 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:48.161 13:03:52 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:48.161 13:03:52 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:19:48.417 malloc1 00:19:48.417 13:03:52 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:48.675 [2024-04-17 13:03:52.695372] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:48.675 [2024-04-17 13:03:52.695684] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:48.675 [2024-04-17 13:03:52.695914] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:19:48.675 [2024-04-17 13:03:52.696083] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:48.675 [2024-04-17 13:03:52.698766] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:48.675 [2024-04-17 13:03:52.698939] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:48.675 pt1 00:19:48.675 13:03:52 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:48.675 13:03:52 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:48.675 13:03:52 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:19:48.675 13:03:52 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:19:48.675 13:03:52 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:48.675 13:03:52 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:48.675 13:03:52 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:48.675 13:03:52 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:48.675 13:03:52 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:19:48.932 malloc2 00:19:48.932 13:03:52 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:19:49.191 [2024-04-17 13:03:53.227293] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:49.191 [2024-04-17 13:03:53.227592] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:49.191 [2024-04-17 13:03:53.227677] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:19:49.191 [2024-04-17 13:03:53.228014] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:49.191 [2024-04-17 13:03:53.230648] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:49.191 [2024-04-17 13:03:53.230810] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:49.191 pt2 00:19:49.191 13:03:53 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:49.191 13:03:53 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:49.191 13:03:53 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:19:49.191 13:03:53 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:19:49.191 13:03:53 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:49.191 13:03:53 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:49.191 13:03:53 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:49.191 13:03:53 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:49.191 13:03:53 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:19:49.447 malloc3 00:19:49.447 13:03:53 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:49.704 [2024-04-17 13:03:53.730568] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:49.704 [2024-04-17 13:03:53.730821] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:49.704 [2024-04-17 13:03:53.730975] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:19:49.704 [2024-04-17 13:03:53.731111] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:49.704 [2024-04-17 13:03:53.733762] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:49.704 [2024-04-17 13:03:53.733933] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:49.704 pt3 00:19:49.704 13:03:53 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:49.704 13:03:53 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:49.704 13:03:53 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:19:49.961 [2024-04-17 13:03:53.958705] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:49.961 [2024-04-17 13:03:53.961076] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:49.961 [2024-04-17 13:03:53.961269] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:49.961 [2024-04-17 13:03:53.961603] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:19:49.961 [2024-04-17 13:03:53.961724] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:49.961 [2024-04-17 13:03:53.961926] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:19:49.961 [2024-04-17 13:03:53.962458] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:19:49.961 [2024-04-17 13:03:53.962576] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:19:49.961 [2024-04-17 13:03:53.962914] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:49.961 13:03:53 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:49.961 13:03:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:49.961 13:03:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:49.961 13:03:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:49.961 13:03:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:49.961 13:03:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:49.961 13:03:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:49.961 13:03:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:49.961 13:03:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:49.961 13:03:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:49.961 13:03:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.961 13:03:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.218 13:03:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:50.218 "name": "raid_bdev1", 00:19:50.218 "uuid": "1e5bafe7-6d41-493a-b4b4-de22c992a24b", 00:19:50.218 "strip_size_kb": 0, 00:19:50.218 "state": "online", 00:19:50.218 "raid_level": "raid1", 00:19:50.218 "superblock": true, 00:19:50.218 "num_base_bdevs": 3, 00:19:50.218 "num_base_bdevs_discovered": 3, 00:19:50.218 "num_base_bdevs_operational": 3, 00:19:50.218 "base_bdevs_list": [ 00:19:50.218 { 00:19:50.218 "name": "pt1", 00:19:50.218 "uuid": "3e65d163-f21c-558f-b817-df4762941f6a", 00:19:50.218 "is_configured": true, 00:19:50.218 "data_offset": 2048, 00:19:50.218 "data_size": 63488 00:19:50.218 }, 00:19:50.218 { 00:19:50.218 "name": "pt2", 00:19:50.218 "uuid": "2fe2a46d-c846-51d8-8beb-3c0dd27c80ed", 00:19:50.218 "is_configured": true, 00:19:50.218 "data_offset": 2048, 00:19:50.218 "data_size": 63488 00:19:50.218 }, 00:19:50.218 { 00:19:50.218 "name": "pt3", 00:19:50.218 "uuid": "b2c42b28-bc74-547c-858f-0e9e0091d07c", 00:19:50.218 "is_configured": true, 00:19:50.218 "data_offset": 2048, 00:19:50.218 "data_size": 63488 00:19:50.218 } 00:19:50.218 ] 00:19:50.218 }' 00:19:50.218 13:03:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:50.218 13:03:54 -- common/autotest_common.sh@10 -- # set +x 00:19:50.782 13:03:54 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:50.782 13:03:54 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:19:51.040 [2024-04-17 13:03:55.055384] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:51.040 13:03:55 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=1e5bafe7-6d41-493a-b4b4-de22c992a24b 00:19:51.040 13:03:55 -- bdev/bdev_raid.sh@380 -- # '[' -z 1e5bafe7-6d41-493a-b4b4-de22c992a24b ']' 00:19:51.040 13:03:55 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:51.298 [2024-04-17 13:03:55.331204] 
bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:51.298 [2024-04-17 13:03:55.331451] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:51.298 [2024-04-17 13:03:55.331690] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:51.298 [2024-04-17 13:03:55.331932] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:51.298 [2024-04-17 13:03:55.332055] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:19:51.298 13:03:55 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:51.298 13:03:55 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:19:51.556 13:03:55 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:19:51.556 13:03:55 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:19:51.556 13:03:55 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:51.556 13:03:55 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:51.815 13:03:55 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:51.815 13:03:55 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:52.073 13:03:56 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:52.073 13:03:56 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:52.330 13:03:56 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:52.330 13:03:56 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:52.587 13:03:56 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:19:52.587 13:03:56 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:52.587 13:03:56 -- common/autotest_common.sh@638 -- # local es=0 00:19:52.587 13:03:56 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:52.587 13:03:56 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:52.587 13:03:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:52.587 13:03:56 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:52.587 13:03:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:52.587 13:03:56 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:52.587 13:03:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:52.587 13:03:56 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:52.587 13:03:56 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:52.587 13:03:56 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:52.845 [2024-04-17 13:03:56.935525] 
bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:52.845 [2024-04-17 13:03:56.937881] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:52.845 [2024-04-17 13:03:56.938084] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:52.845 [2024-04-17 13:03:56.938283] bdev_raid.c:2995:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:19:52.845 [2024-04-17 13:03:56.938502] bdev_raid.c:2995:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:19:52.845 [2024-04-17 13:03:56.938658] bdev_raid.c:2995:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:19:52.845 [2024-04-17 13:03:56.938747] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:52.846 [2024-04-17 13:03:56.938824] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring 00:19:52.846 request: 00:19:52.846 { 00:19:52.846 "name": "raid_bdev1", 00:19:52.846 "raid_level": "raid1", 00:19:52.846 "base_bdevs": [ 00:19:52.846 "malloc1", 00:19:52.846 "malloc2", 00:19:52.846 "malloc3" 00:19:52.846 ], 00:19:52.846 "superblock": false, 00:19:52.846 "method": "bdev_raid_create", 00:19:52.846 "req_id": 1 00:19:52.846 } 00:19:52.846 Got JSON-RPC error response 00:19:52.846 response: 00:19:52.846 { 00:19:52.846 "code": -17, 00:19:52.846 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:52.846 } 00:19:52.846 13:03:56 -- common/autotest_common.sh@641 -- # es=1 00:19:52.846 13:03:56 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:52.846 13:03:56 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:52.846 13:03:56 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:52.846 13:03:56 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:52.846 13:03:56 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:19:53.104 13:03:57 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:19:53.104 13:03:57 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:19:53.104 13:03:57 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:53.363 [2024-04-17 13:03:57.415601] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:53.363 [2024-04-17 13:03:57.415834] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:53.363 [2024-04-17 13:03:57.415995] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:53.363 [2024-04-17 13:03:57.416173] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:53.363 [2024-04-17 13:03:57.418684] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:53.363 [2024-04-17 13:03:57.418842] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:53.363 [2024-04-17 13:03:57.419071] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:19:53.363 [2024-04-17 13:03:57.419243] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:53.363 pt1 00:19:53.363 13:03:57 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:19:53.363 
00:19:53.363 13:03:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:53.363 13:03:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:19:53.363 13:03:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:53.363 13:03:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:53.363 13:03:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:19:53.363 13:03:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:53.363 13:03:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:53.363 13:03:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:53.363 13:03:57 -- bdev/bdev_raid.sh@125 -- # local tmp
00:19:53.363 13:03:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:53.363 13:03:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:53.621 13:03:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:53.621 "name": "raid_bdev1",
00:19:53.621 "uuid": "1e5bafe7-6d41-493a-b4b4-de22c992a24b",
00:19:53.621 "strip_size_kb": 0,
00:19:53.621 "state": "configuring",
00:19:53.621 "raid_level": "raid1",
00:19:53.621 "superblock": true,
00:19:53.621 "num_base_bdevs": 3,
00:19:53.621 "num_base_bdevs_discovered": 1,
00:19:53.621 "num_base_bdevs_operational": 3,
00:19:53.621 "base_bdevs_list": [
00:19:53.621 {
00:19:53.621 "name": "pt1",
00:19:53.621 "uuid": "3e65d163-f21c-558f-b817-df4762941f6a",
00:19:53.621 "is_configured": true,
00:19:53.621 "data_offset": 2048,
00:19:53.621 "data_size": 63488
00:19:53.621 },
00:19:53.621 {
00:19:53.621 "name": null,
00:19:53.621 "uuid": "2fe2a46d-c846-51d8-8beb-3c0dd27c80ed",
00:19:53.621 "is_configured": false,
00:19:53.621 "data_offset": 2048,
00:19:53.621 "data_size": 63488
00:19:53.621 },
00:19:53.621 {
00:19:53.621 "name": null,
00:19:53.621 "uuid": "b2c42b28-bc74-547c-858f-0e9e0091d07c",
00:19:53.621 "is_configured": false,
00:19:53.621 "data_offset": 2048,
00:19:53.621 "data_size": 63488
00:19:53.621 }
00:19:53.621 ]
00:19:53.621 }'
00:19:53.621 13:03:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:53.621 13:03:57 -- common/autotest_common.sh@10 -- # set +x
00:19:54.556 13:03:58 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']'
00:19:54.556 13:03:58 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:19:54.556 [2024-04-17 13:03:58.663899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:19:54.556 [2024-04-17 13:03:58.664143] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:54.556 [2024-04-17 13:03:58.664288] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:19:54.556 [2024-04-17 13:03:58.664422] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:54.556 [2024-04-17 13:03:58.665021] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:54.556 [2024-04-17 13:03:58.665166] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:19:54.556 [2024-04-17 13:03:58.665390] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:19:54.556 [2024-04-17 13:03:58.665523] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:19:54.556 pt2
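Every assertion in this test goes through verify_raid_bdev_state, traced at @117-@129 above: fetch all raid bdevs over the JSON-RPC socket, pick out one record with jq, then compare its fields against the expected values. A minimal sketch of the same check, with only two of the fields the real helper compares:

    # Minimal sketch of the check done by verify_raid_bdev_state; the real
    # helper also compares raid_level, strip_size_kb and both base-bdev counts.
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    info=$(rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r '.state' <<< "$info") == configuring ]]
    [[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") == 1 ]]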
00:19:54.556 13:03:58 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:19:54.814 [2024-04-17 13:03:58.932031] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:19:54.814 13:03:58 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:19:54.814 13:03:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:54.814 13:03:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:19:54.814 13:03:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:54.814 13:03:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:54.814 13:03:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:19:54.814 13:03:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:54.814 13:03:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:54.814 13:03:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:54.814 13:03:58 -- bdev/bdev_raid.sh@125 -- # local tmp
00:19:54.814 13:03:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:54.814 13:03:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:55.379 13:03:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:55.379 "name": "raid_bdev1",
00:19:55.379 "uuid": "1e5bafe7-6d41-493a-b4b4-de22c992a24b",
00:19:55.379 "strip_size_kb": 0,
00:19:55.379 "state": "configuring",
00:19:55.379 "raid_level": "raid1",
00:19:55.379 "superblock": true,
00:19:55.379 "num_base_bdevs": 3,
00:19:55.379 "num_base_bdevs_discovered": 1,
00:19:55.379 "num_base_bdevs_operational": 3,
00:19:55.379 "base_bdevs_list": [
00:19:55.379 {
00:19:55.379 "name": "pt1",
00:19:55.379 "uuid": "3e65d163-f21c-558f-b817-df4762941f6a",
00:19:55.379 "is_configured": true,
00:19:55.379 "data_offset": 2048,
00:19:55.380 "data_size": 63488
00:19:55.380 },
00:19:55.380 {
00:19:55.380 "name": null,
00:19:55.380 "uuid": "2fe2a46d-c846-51d8-8beb-3c0dd27c80ed",
00:19:55.380 "is_configured": false,
00:19:55.380 "data_offset": 2048,
00:19:55.380 "data_size": 63488
00:19:55.380 },
00:19:55.380 {
00:19:55.380 "name": null,
00:19:55.380 "uuid": "b2c42b28-bc74-547c-858f-0e9e0091d07c",
00:19:55.380 "is_configured": false,
00:19:55.380 "data_offset": 2048,
00:19:55.380 "data_size": 63488
00:19:55.380 }
00:19:55.380 ]
00:19:55.380 }'
00:19:55.380 13:03:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:55.380 13:03:59 -- common/autotest_common.sh@10 -- # set +x
00:19:55.946 13:03:59 -- bdev/bdev_raid.sh@422 -- # (( i = 1 ))
00:19:55.946 13:03:59 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:19:55.946 13:03:59 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:19:56.205 [2024-04-17 13:04:00.152266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:19:56.205 [2024-04-17 13:04:00.152535] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:56.205 [2024-04-17 13:04:00.152693] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:19:56.205 [2024-04-17 13:04:00.152814] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:56.205 [2024-04-17 13:04:00.153406] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:56.205 [2024-04-17 13:04:00.153551] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:19:56.205 [2024-04-17 13:04:00.153791] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:19:56.205 [2024-04-17 13:04:00.153923] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:19:56.205 pt2
00:19:56.205 13:04:00 -- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:19:56.205 13:04:00 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:19:56.205 13:04:00 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:19:56.462 [2024-04-17 13:04:00.424354] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:19:56.462 [2024-04-17 13:04:00.424628] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:56.462 [2024-04-17 13:04:00.424758] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580
00:19:56.462 [2024-04-17 13:04:00.424884] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:56.462 [2024-04-17 13:04:00.425432] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:56.462 [2024-04-17 13:04:00.425588] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:19:56.462 [2024-04-17 13:04:00.425824] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
00:19:56.462 [2024-04-17 13:04:00.425956] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:19:56.462 [2024-04-17 13:04:00.426202] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80
00:19:56.462 [2024-04-17 13:04:00.426316] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:19:56.462 [2024-04-17 13:04:00.426465] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:19:56.462 [2024-04-17 13:04:00.426916] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80
00:19:56.462 [2024-04-17 13:04:00.427031] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80
00:19:56.462 [2024-04-17 13:04:00.427272] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:56.462 pt3
00:19:56.462 13:04:00 -- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:19:56.462 13:04:00 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:19:56.462 13:04:00 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:19:56.462 13:04:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:56.462 13:04:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:19:56.462 13:04:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:56.462 13:04:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:56.462 13:04:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:19:56.462 13:04:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:56.462 13:04:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:56.462 13:04:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:56.463 13:04:00 -- bdev/bdev_raid.sh@125 -- # local tmp
00:19:56.463 13:04:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:56.463 13:04:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:56.721 13:04:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:56.721 "name": "raid_bdev1",
00:19:56.721 "uuid": "1e5bafe7-6d41-493a-b4b4-de22c992a24b",
00:19:56.721 "strip_size_kb": 0,
00:19:56.721 "state": "online",
00:19:56.721 "raid_level": "raid1",
00:19:56.721 "superblock": true,
00:19:56.721 "num_base_bdevs": 3,
00:19:56.721 "num_base_bdevs_discovered": 3,
00:19:56.721 "num_base_bdevs_operational": 3,
00:19:56.721 "base_bdevs_list": [
00:19:56.721 {
00:19:56.721 "name": "pt1",
00:19:56.721 "uuid": "3e65d163-f21c-558f-b817-df4762941f6a",
00:19:56.721 "is_configured": true,
00:19:56.721 "data_offset": 2048,
00:19:56.721 "data_size": 63488
00:19:56.721 },
00:19:56.721 {
00:19:56.721 "name": "pt2",
00:19:56.721 "uuid": "2fe2a46d-c846-51d8-8beb-3c0dd27c80ed",
00:19:56.721 "is_configured": true,
00:19:56.721 "data_offset": 2048,
00:19:56.721 "data_size": 63488
00:19:56.721 },
00:19:56.721 {
00:19:56.721 "name": "pt3",
00:19:56.721 "uuid": "b2c42b28-bc74-547c-858f-0e9e0091d07c",
00:19:56.721 "is_configured": true,
00:19:56.721 "data_offset": 2048,
00:19:56.721 "data_size": 63488
00:19:56.721 }
00:19:56.721 ]
00:19:56.721 }'
00:19:56.721 13:04:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:56.721 13:04:00 -- common/autotest_common.sh@10 -- # set +x
00:19:57.287 13:04:01 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:19:57.287 13:04:01 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid'
00:19:57.544 [2024-04-17 13:04:01.601273] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:19:57.544 13:04:01 -- bdev/bdev_raid.sh@430 -- # '[' 1e5bafe7-6d41-493a-b4b4-de22c992a24b '!=' 1e5bafe7-6d41-493a-b4b4-de22c992a24b ']'
00:19:57.544 13:04:01 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1
00:19:57.544 13:04:01 -- bdev/bdev_raid.sh@195 -- # case $1 in
00:19:57.544 13:04:01 -- bdev/bdev_raid.sh@196 -- # return 0
00:19:57.544 13:04:01 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:19:57.801 [2024-04-17 13:04:01.869104] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:19:57.802 13:04:01 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:19:57.802 13:04:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:57.802 13:04:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:19:57.802 13:04:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:57.802 13:04:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:57.802 13:04:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:19:57.802 13:04:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:57.802 13:04:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:57.802 13:04:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:57.802 13:04:01 -- bdev/bdev_raid.sh@125 -- # local tmp
00:19:57.802 13:04:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:57.802 13:04:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:58.060 13:04:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:58.060 "name": "raid_bdev1",
00:19:58.060 "uuid": "1e5bafe7-6d41-493a-b4b4-de22c992a24b",
00:19:58.060 "strip_size_kb": 0,
00:19:58.060 "state": "online",
00:19:58.060 "raid_level": "raid1", 00:19:58.060 "superblock": true, 00:19:58.060 "num_base_bdevs": 3, 00:19:58.060 "num_base_bdevs_discovered": 2, 00:19:58.060 "num_base_bdevs_operational": 2, 00:19:58.060 "base_bdevs_list": [ 00:19:58.060 { 00:19:58.060 "name": null, 00:19:58.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.060 "is_configured": false, 00:19:58.060 "data_offset": 2048, 00:19:58.060 "data_size": 63488 00:19:58.060 }, 00:19:58.060 { 00:19:58.060 "name": "pt2", 00:19:58.060 "uuid": "2fe2a46d-c846-51d8-8beb-3c0dd27c80ed", 00:19:58.060 "is_configured": true, 00:19:58.060 "data_offset": 2048, 00:19:58.060 "data_size": 63488 00:19:58.060 }, 00:19:58.060 { 00:19:58.060 "name": "pt3", 00:19:58.060 "uuid": "b2c42b28-bc74-547c-858f-0e9e0091d07c", 00:19:58.060 "is_configured": true, 00:19:58.060 "data_offset": 2048, 00:19:58.060 "data_size": 63488 00:19:58.060 } 00:19:58.060 ] 00:19:58.060 }' 00:19:58.060 13:04:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:58.060 13:04:02 -- common/autotest_common.sh@10 -- # set +x 00:19:58.994 13:04:02 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:58.994 [2024-04-17 13:04:03.017304] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:58.994 [2024-04-17 13:04:03.017558] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:58.995 [2024-04-17 13:04:03.017803] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:58.995 [2024-04-17 13:04:03.018002] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:58.995 [2024-04-17 13:04:03.018151] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:19:58.995 13:04:03 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:19:58.995 13:04:03 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.253 13:04:03 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:19:59.253 13:04:03 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:19:59.253 13:04:03 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:19:59.253 13:04:03 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:19:59.253 13:04:03 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:59.512 13:04:03 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:19:59.512 13:04:03 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:19:59.512 13:04:03 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:59.771 13:04:03 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:19:59.771 13:04:03 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:19:59.771 13:04:03 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:19:59.771 13:04:03 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:19:59.771 13:04:03 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:00.029 [2024-04-17 13:04:04.125534] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:00.030 [2024-04-17 13:04:04.126268] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:00.030 [2024-04-17 
00:20:00.030 [2024-04-17 13:04:04.126571] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:20:00.030 [2024-04-17 13:04:04.126832] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:00.030 [2024-04-17 13:04:04.129691] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:00.030 [2024-04-17 13:04:04.129984] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:20:00.030 [2024-04-17 13:04:04.130381] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:20:00.030 [2024-04-17 13:04:04.130580] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:20:00.030 pt2
00:20:00.030 13:04:04 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:20:00.030 13:04:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:20:00.030 13:04:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:20:00.030 13:04:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:20:00.030 13:04:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:20:00.030 13:04:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:20:00.030 13:04:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:20:00.030 13:04:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:20:00.030 13:04:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:20:00.030 13:04:04 -- bdev/bdev_raid.sh@125 -- # local tmp
00:20:00.030 13:04:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:00.030 13:04:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:00.292 13:04:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:20:00.292 "name": "raid_bdev1",
00:20:00.292 "uuid": "1e5bafe7-6d41-493a-b4b4-de22c992a24b",
00:20:00.292 "strip_size_kb": 0,
00:20:00.292 "state": "configuring",
00:20:00.292 "raid_level": "raid1",
00:20:00.292 "superblock": true,
00:20:00.292 "num_base_bdevs": 3,
00:20:00.292 "num_base_bdevs_discovered": 1,
00:20:00.292 "num_base_bdevs_operational": 2,
00:20:00.292 "base_bdevs_list": [
00:20:00.292 {
00:20:00.292 "name": null,
00:20:00.292 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:00.292 "is_configured": false,
00:20:00.292 "data_offset": 2048,
00:20:00.292 "data_size": 63488
00:20:00.292 },
00:20:00.292 {
00:20:00.293 "name": "pt2",
00:20:00.293 "uuid": "2fe2a46d-c846-51d8-8beb-3c0dd27c80ed",
00:20:00.293 "is_configured": true,
00:20:00.293 "data_offset": 2048,
00:20:00.293 "data_size": 63488
00:20:00.293 },
00:20:00.293 {
00:20:00.293 "name": null,
00:20:00.293 "uuid": "b2c42b28-bc74-547c-858f-0e9e0091d07c",
00:20:00.293 "is_configured": false,
00:20:00.293 "data_offset": 2048,
00:20:00.293 "data_size": 63488
00:20:00.293 }
00:20:00.293 ]
00:20:00.293 }'
00:20:00.293 13:04:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:20:00.293 13:04:04 -- common/autotest_common.sh@10 -- # set +x
00:20:01.240 13:04:05 -- bdev/bdev_raid.sh@454 -- # (( i++ ))
00:20:01.240 13:04:05 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 ))
00:20:01.240 13:04:05 -- bdev/bdev_raid.sh@462 -- # i=2
00:20:01.240 13:04:05 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
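At this point only pt2 has been rebuilt and the raid sits in configuring with one of two expected members; the pt3 create just issued (output below) gives examine a second superblock to work with, and raid1's redundancy means two surviving members are enough to bring raid_bdev1 back online without any explicit bdev_raid_create. A sketch of that superblock-driven reassembly check, reusing the names from the log:

    # Sketch of the reassembly step: no bdev_raid_create is issued; examine
    # sees the raid superblock on each new passthru bdev and re-adds it.
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    rpc bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
    # raid1 tolerates the missing leg, so two members are enough to go online
    rpc bdev_raid_get_bdevs all | jq -e '.[] | select(.name == "raid_bdev1") | .state == "online"'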
00:20:01.240 [2024-04-17 13:04:05.302759] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:20:01.240 [2024-04-17 13:04:05.303486] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:01.240 [2024-04-17 13:04:05.303780] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:20:01.240 [2024-04-17 13:04:05.304089] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:01.240 [2024-04-17 13:04:05.304880] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:01.240 [2024-04-17 13:04:05.305140] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:20:01.240 [2024-04-17 13:04:05.305506] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
00:20:01.240 [2024-04-17 13:04:05.305658] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:20:01.240 [2024-04-17 13:04:05.305833] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80
00:20:01.240 [2024-04-17 13:04:05.305972] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:20:01.240 [2024-04-17 13:04:05.306139] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:20:01.240 [2024-04-17 13:04:05.306624] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80
00:20:01.240 [2024-04-17 13:04:05.306750] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80
00:20:01.240 [2024-04-17 13:04:05.307049] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:01.240 pt3
00:20:01.240 13:04:05 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:20:01.240 13:04:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:20:01.240 13:04:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:20:01.240 13:04:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:20:01.240 13:04:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:20:01.240 13:04:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:20:01.240 13:04:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:20:01.240 13:04:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:20:01.240 13:04:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:20:01.240 13:04:05 -- bdev/bdev_raid.sh@125 -- # local tmp
00:20:01.240 13:04:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:01.240 13:04:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:01.498 13:04:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:20:01.498 "name": "raid_bdev1",
00:20:01.498 "uuid": "1e5bafe7-6d41-493a-b4b4-de22c992a24b",
00:20:01.498 "strip_size_kb": 0,
00:20:01.498 "state": "online",
00:20:01.498 "raid_level": "raid1",
00:20:01.498 "superblock": true,
00:20:01.498 "num_base_bdevs": 3,
00:20:01.498 "num_base_bdevs_discovered": 2,
00:20:01.498 "num_base_bdevs_operational": 2,
00:20:01.498 "base_bdevs_list": [
00:20:01.498 {
00:20:01.498 "name": null,
00:20:01.498 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:01.498 "is_configured": false,
00:20:01.498 "data_offset": 2048,
00:20:01.498 "data_size": 63488
00:20:01.498 },
00:20:01.498 {
00:20:01.498 "name": "pt2",
00:20:01.498 "uuid": "2fe2a46d-c846-51d8-8beb-3c0dd27c80ed",
"is_configured": true, 00:20:01.498 "data_offset": 2048, 00:20:01.498 "data_size": 63488 00:20:01.498 }, 00:20:01.498 { 00:20:01.498 "name": "pt3", 00:20:01.498 "uuid": "b2c42b28-bc74-547c-858f-0e9e0091d07c", 00:20:01.498 "is_configured": true, 00:20:01.498 "data_offset": 2048, 00:20:01.498 "data_size": 63488 00:20:01.498 } 00:20:01.498 ] 00:20:01.498 }' 00:20:01.498 13:04:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:01.498 13:04:05 -- common/autotest_common.sh@10 -- # set +x 00:20:02.432 13:04:06 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:20:02.432 13:04:06 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:02.432 [2024-04-17 13:04:06.567286] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:02.432 [2024-04-17 13:04:06.567513] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:02.432 [2024-04-17 13:04:06.567707] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:02.432 [2024-04-17 13:04:06.567961] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:02.432 [2024-04-17 13:04:06.568104] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:20:02.709 13:04:06 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:02.709 13:04:06 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:20:03.018 13:04:06 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:20:03.018 13:04:06 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:20:03.018 13:04:06 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:03.018 [2024-04-17 13:04:07.123442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:03.018 [2024-04-17 13:04:07.124184] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:03.018 [2024-04-17 13:04:07.124478] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:20:03.018 [2024-04-17 13:04:07.124734] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:03.018 [2024-04-17 13:04:07.127518] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:03.018 [2024-04-17 13:04:07.127794] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:03.018 [2024-04-17 13:04:07.128208] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:20:03.018 [2024-04-17 13:04:07.128385] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:03.018 pt1 00:20:03.018 13:04:07 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:20:03.018 13:04:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:03.018 13:04:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:03.018 13:04:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:03.018 13:04:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:03.018 13:04:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:03.018 13:04:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:03.018 13:04:07 -- bdev/bdev_raid.sh@123 -- # 
00:20:03.018 13:04:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:20:03.018 13:04:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:20:03.018 13:04:07 -- bdev/bdev_raid.sh@125 -- # local tmp
00:20:03.018 13:04:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:03.018 13:04:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:03.585 13:04:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:20:03.585 "name": "raid_bdev1",
00:20:03.585 "uuid": "1e5bafe7-6d41-493a-b4b4-de22c992a24b",
00:20:03.585 "strip_size_kb": 0,
00:20:03.585 "state": "configuring",
00:20:03.585 "raid_level": "raid1",
00:20:03.585 "superblock": true,
00:20:03.585 "num_base_bdevs": 3,
00:20:03.585 "num_base_bdevs_discovered": 1,
00:20:03.585 "num_base_bdevs_operational": 3,
00:20:03.585 "base_bdevs_list": [
00:20:03.585 {
00:20:03.585 "name": "pt1",
00:20:03.585 "uuid": "3e65d163-f21c-558f-b817-df4762941f6a",
00:20:03.585 "is_configured": true,
00:20:03.585 "data_offset": 2048,
00:20:03.585 "data_size": 63488
00:20:03.585 },
00:20:03.585 {
00:20:03.585 "name": null,
00:20:03.585 "uuid": "2fe2a46d-c846-51d8-8beb-3c0dd27c80ed",
00:20:03.585 "is_configured": false,
00:20:03.585 "data_offset": 2048,
00:20:03.585 "data_size": 63488
00:20:03.585 },
00:20:03.585 {
00:20:03.585 "name": null,
00:20:03.585 "uuid": "b2c42b28-bc74-547c-858f-0e9e0091d07c",
00:20:03.585 "is_configured": false,
00:20:03.585 "data_offset": 2048,
00:20:03.585 "data_size": 63488
00:20:03.585 }
00:20:03.585 ]
00:20:03.585 }'
00:20:03.585 13:04:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:20:03.585 13:04:07 -- common/autotest_common.sh@10 -- # set +x
00:20:04.150 13:04:08 -- bdev/bdev_raid.sh@484 -- # (( i = 1 ))
00:20:04.150 13:04:08 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs ))
00:20:04.150 13:04:08 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:20:04.408 13:04:08 -- bdev/bdev_raid.sh@484 -- # (( i++ ))
00:20:04.408 13:04:08 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs ))
00:20:04.408 13:04:08 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:20:04.666 13:04:08 -- bdev/bdev_raid.sh@484 -- # (( i++ ))
00:20:04.666 13:04:08 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs ))
00:20:04.666 13:04:08 -- bdev/bdev_raid.sh@489 -- # i=2
00:20:04.666 13:04:08 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:20:04.925 [2024-04-17 13:04:08.852850] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:20:04.925 [2024-04-17 13:04:08.853411] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:04.925 [2024-04-17 13:04:08.853489] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80
00:20:04.925 [2024-04-17 13:04:08.853752] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:04.925 [2024-04-17 13:04:08.854333] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:04.925 [2024-04-17 13:04:08.854489] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:20:04.925 [2024-04-17 13:04:08.854721] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
00:20:04.925 [2024-04-17 13:04:08.854834] bdev_raid.c:3395:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2)
00:20:04.925 [2024-04-17 13:04:08.854952] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:20:04.925 [2024-04-17 13:04:08.855023] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring
00:20:04.925 [2024-04-17 13:04:08.855284] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:20:04.925 pt3
00:20:04.925 13:04:08 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:20:04.925 13:04:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:20:04.925 13:04:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:20:04.925 13:04:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:20:04.925 13:04:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:20:04.925 13:04:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:20:04.925 13:04:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:20:04.925 13:04:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:20:04.925 13:04:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:20:04.925 13:04:08 -- bdev/bdev_raid.sh@125 -- # local tmp
00:20:04.925 13:04:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:04.925 13:04:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:05.184 13:04:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:20:05.184 "name": "raid_bdev1",
00:20:05.184 "uuid": "1e5bafe7-6d41-493a-b4b4-de22c992a24b",
00:20:05.184 "strip_size_kb": 0,
00:20:05.184 "state": "configuring",
00:20:05.184 "raid_level": "raid1",
00:20:05.184 "superblock": true,
00:20:05.184 "num_base_bdevs": 3,
00:20:05.184 "num_base_bdevs_discovered": 1,
00:20:05.184 "num_base_bdevs_operational": 2,
00:20:05.184 "base_bdevs_list": [
00:20:05.184 {
00:20:05.184 "name": null,
00:20:05.184 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:05.184 "is_configured": false,
00:20:05.184 "data_offset": 2048,
00:20:05.184 "data_size": 63488
00:20:05.184 },
00:20:05.184 {
00:20:05.184 "name": null,
00:20:05.184 "uuid": "2fe2a46d-c846-51d8-8beb-3c0dd27c80ed",
00:20:05.184 "is_configured": false,
00:20:05.184 "data_offset": 2048,
00:20:05.184 "data_size": 63488
00:20:05.184 },
00:20:05.184 {
00:20:05.184 "name": "pt3",
00:20:05.184 "uuid": "b2c42b28-bc74-547c-858f-0e9e0091d07c",
00:20:05.184 "is_configured": true,
00:20:05.184 "data_offset": 2048,
00:20:05.184 "data_size": 63488
00:20:05.184 }
00:20:05.184 ]
00:20:05.184 }'
00:20:05.184 13:04:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:20:05.184 13:04:09 -- common/autotest_common.sh@10 -- # set +x
00:20:05.751 13:04:09 -- bdev/bdev_raid.sh@497 -- # (( i = 1 ))
00:20:05.751 13:04:09 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 ))
00:20:05.751 13:04:09 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:20:06.010 [2024-04-17 13:04:10.021098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:20:06.010 [2024-04-17 13:04:10.021387] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:06.010 [2024-04-17 13:04:10.021563] vbdev_passthru.c:
676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380
00:20:06.010 [2024-04-17 13:04:10.021691] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:06.010 [2024-04-17 13:04:10.022338] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:06.010 [2024-04-17 13:04:10.022504] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:20:06.010 [2024-04-17 13:04:10.022715] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:20:06.010 [2024-04-17 13:04:10.022852] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:20:06.010 [2024-04-17 13:04:10.023090] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080
00:20:06.010 [2024-04-17 13:04:10.023212] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:20:06.010 [2024-04-17 13:04:10.023379] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:20:06.010 [2024-04-17 13:04:10.023863] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080
00:20:06.010 [2024-04-17 13:04:10.024005] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080
00:20:06.010 [2024-04-17 13:04:10.024257] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:06.010 pt2
00:20:06.010 13:04:10 -- bdev/bdev_raid.sh@497 -- # (( i++ ))
00:20:06.010 13:04:10 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 ))
00:20:06.010 13:04:10 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:20:06.010 13:04:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:20:06.010 13:04:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:20:06.010 13:04:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:20:06.010 13:04:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:20:06.010 13:04:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:20:06.010 13:04:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:20:06.010 13:04:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:20:06.010 13:04:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:20:06.010 13:04:10 -- bdev/bdev_raid.sh@125 -- # local tmp
00:20:06.010 13:04:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:06.010 13:04:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:06.268 13:04:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:20:06.268 "name": "raid_bdev1",
00:20:06.268 "uuid": "1e5bafe7-6d41-493a-b4b4-de22c992a24b",
00:20:06.268 "strip_size_kb": 0,
00:20:06.268 "state": "online",
00:20:06.268 "raid_level": "raid1",
00:20:06.268 "superblock": true,
00:20:06.268 "num_base_bdevs": 3,
00:20:06.268 "num_base_bdevs_discovered": 2,
00:20:06.268 "num_base_bdevs_operational": 2,
00:20:06.268 "base_bdevs_list": [
00:20:06.268 {
00:20:06.268 "name": null,
00:20:06.268 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:06.268 "is_configured": false,
00:20:06.268 "data_offset": 2048,
00:20:06.268 "data_size": 63488
00:20:06.268 },
00:20:06.268 {
00:20:06.268 "name": "pt2",
00:20:06.268 "uuid": "2fe2a46d-c846-51d8-8beb-3c0dd27c80ed",
00:20:06.268 "is_configured": true,
00:20:06.268 "data_offset": 2048,
00:20:06.268 "data_size": 63488
00:20:06.268 },
00:20:06.268 {
00:20:06.268 "name": "pt3",
00:20:06.268 "uuid": "b2c42b28-bc74-547c-858f-0e9e0091d07c",
00:20:06.268 "is_configured": true,
00:20:06.268 "data_offset": 2048,
00:20:06.268 "data_size": 63488
00:20:06.268 }
00:20:06.268 ]
00:20:06.268 }'
00:20:06.268 13:04:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:20:06.268 13:04:10 -- common/autotest_common.sh@10 -- # set +x
00:20:06.834 13:04:10 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:20:06.834 13:04:10 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid'
00:20:07.092 [2024-04-17 13:04:11.181605] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:20:07.092 13:04:11 -- bdev/bdev_raid.sh@506 -- # '[' 1e5bafe7-6d41-493a-b4b4-de22c992a24b '!=' 1e5bafe7-6d41-493a-b4b4-de22c992a24b ']'
00:20:07.092 13:04:11 -- bdev/bdev_raid.sh@511 -- # killprocess 125473
00:20:07.092 13:04:11 -- common/autotest_common.sh@924 -- # '[' -z 125473 ']'
00:20:07.092 13:04:11 -- common/autotest_common.sh@928 -- # kill -0 125473
00:20:07.092 13:04:11 -- common/autotest_common.sh@929 -- # uname
00:20:07.092 13:04:11 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']'
00:20:07.092 13:04:11 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 125473
00:20:07.092 13:04:11 -- common/autotest_common.sh@930 -- # process_name=reactor_0
00:20:07.092 13:04:11 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']'
00:20:07.092 13:04:11 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 125473'
killing process with pid 125473
00:20:07.092 13:04:11 -- common/autotest_common.sh@943 -- # kill 125473
00:20:07.092 13:04:11 -- common/autotest_common.sh@948 -- # wait 125473
00:20:07.092 [2024-04-17 13:04:11.218560] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:20:07.092 [2024-04-17 13:04:11.218658] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:20:07.092 [2024-04-17 13:04:11.218725] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:20:07.092 [2024-04-17 13:04:11.218750] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline
00:20:07.349 [2024-04-17 13:04:11.466904] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:20:08.722 ************************************
00:20:08.722 END TEST raid_superblock_test
00:20:08.722 ************************************
00:20:08.722 13:04:12 -- bdev/bdev_raid.sh@513 -- # return 0
00:20:08.722 00:20:08.722 real 0m21.378s
00:20:08.722 user 0m39.411s
00:20:08.722 sys 0m2.346s
00:20:08.722 13:04:12 -- common/autotest_common.sh@1100 -- # xtrace_disable
00:20:08.722 13:04:12 -- common/autotest_common.sh@10 -- # set +x
00:20:08.722 13:04:12 -- bdev/bdev_raid.sh@725 -- # for n in {2..4}
00:20:08.722 13:04:12 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1
00:20:08.722 13:04:12 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false
00:20:08.722 13:04:12 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']'
00:20:08.722 13:04:12 -- common/autotest_common.sh@1081 -- # xtrace_disable
00:20:08.722 13:04:12 -- common/autotest_common.sh@10 -- # set +x
00:20:08.722 ************************************
00:20:08.722 START TEST raid_state_function_test
00:20:08.722 ************************************
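The teardown just above follows autotest's killprocess pattern, traced at common/autotest_common.sh@924-@948: check that the pid is set and alive, check the process name (and never kill a sudo wrapper), then signal it and reap it so the next test starts from a clean slate. A simplified sketch of that helper, not the verbatim upstream code:

    # Simplified sketch of the killprocess helper traced above.
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 0          # nothing left to kill
        if [[ $(uname) == Linux ]]; then
            local pname
            pname=$(ps --no-headers -o comm= "$pid")
            [[ $pname == sudo ]] && return 1  # never kill the sudo wrapper itself
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                          # reap it before the next test
    }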
00:20:08.722 13:04:12 -- common/autotest_common.sh@1099 -- # raid_state_function_test raid0 4 false
00:20:08.722 13:04:12 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0
00:20:08.722 13:04:12 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4
00:20:08.722 13:04:12 -- bdev/bdev_raid.sh@204 -- # local superblock=false
00:20:08.722 13:04:12 -- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:20:08.722 13:04:12 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done))
00:20:08.722 13:04:12 -- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:20:08.722 13:04:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:20:08.722 13:04:12 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:20:08.722 13:04:12 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:20:08.722 13:04:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:20:08.722 13:04:12 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:20:08.723 13:04:12 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:20:08.723 13:04:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:20:08.723 13:04:12 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3
00:20:08.723 13:04:12 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:20:08.723 13:04:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:20:08.723 13:04:12 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4
00:20:08.723 13:04:12 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:20:08.723 13:04:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:20:08.723 13:04:12 -- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:20:08.723 13:04:12 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:20:08.723 13:04:12 -- bdev/bdev_raid.sh@208 -- # local strip_size
00:20:08.723 13:04:12 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:20:08.723 13:04:12 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:20:08.723 13:04:12 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']'
00:20:08.723 13:04:12 -- bdev/bdev_raid.sh@213 -- # strip_size=64
00:20:08.723 13:04:12 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:20:08.723 13:04:12 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']'
00:20:08.723 13:04:12 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg=
00:20:08.723 13:04:12 -- bdev/bdev_raid.sh@226 -- # raid_pid=126139
00:20:08.723 13:04:12 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 126139'
Process raid pid: 126139
00:20:08.723 13:04:12 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:20:08.723 13:04:12 -- bdev/bdev_raid.sh@228 -- # waitforlisten 126139 /var/tmp/spdk-raid.sock
00:20:08.723 13:04:12 -- common/autotest_common.sh@817 -- # '[' -z 126139 ']'
00:20:08.723 13:04:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:20:08.723 13:04:12 -- common/autotest_common.sh@822 -- # local max_retries=100
00:20:08.723 13:04:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:20:08.723 13:04:12 -- common/autotest_common.sh@826 -- # xtrace_disable
00:20:08.723 13:04:12 -- common/autotest_common.sh@10 -- # set +x
00:20:08.723 [2024-04-17 13:04:12.722950] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization...
00:20:08.723 [2024-04-17 13:04:12.723376] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:08.981 [2024-04-17 13:04:12.886024] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:09.239 [2024-04-17 13:04:13.091275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:20:09.239 [2024-04-17 13:04:13.289870] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:20:09.497 13:04:13 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:20:09.497 13:04:13 -- common/autotest_common.sh@850 -- # return 0
00:20:09.497 13:04:13 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:20:09.755 [2024-04-17 13:04:13.857347] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:20:09.755 [2024-04-17 13:04:13.857637] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:20:09.755 [2024-04-17 13:04:13.857759] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:20:09.755 [2024-04-17 13:04:13.857827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:20:09.755 [2024-04-17 13:04:13.857929] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:20:09.755 [2024-04-17 13:04:13.858109] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:20:09.755 [2024-04-17 13:04:13.858222] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:20:09.755 [2024-04-17 13:04:13.858286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:20:09.755 13:04:13 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:20:09.755 13:04:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:20:09.755 13:04:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:20:09.755 13:04:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:20:09.755 13:04:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:20:09.755 13:04:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:20:09.755 13:04:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:20:09.755 13:04:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:20:09.755 13:04:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:20:09.755 13:04:13 -- bdev/bdev_raid.sh@125 -- # local tmp
00:20:09.755 13:04:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:09.755 13:04:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:20:10.013 13:04:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:20:10.013 "name": "Existed_Raid",
00:20:10.013 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:10.013 "strip_size_kb": 64,
00:20:10.013 "state": "configuring",
00:20:10.013 "raid_level": "raid0",
00:20:10.013 "superblock": false,
00:20:10.013 "num_base_bdevs": 4,
00:20:10.013 "num_base_bdevs_discovered": 0,
00:20:10.013 "num_base_bdevs_operational": 4,
00:20:10.013 "base_bdevs_list": [
00:20:10.013 {
"name": "BaseBdev1", 00:20:10.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.013 "is_configured": false, 00:20:10.013 "data_offset": 0, 00:20:10.013 "data_size": 0 00:20:10.013 }, 00:20:10.013 { 00:20:10.013 "name": "BaseBdev2", 00:20:10.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.013 "is_configured": false, 00:20:10.013 "data_offset": 0, 00:20:10.013 "data_size": 0 00:20:10.013 }, 00:20:10.013 { 00:20:10.013 "name": "BaseBdev3", 00:20:10.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.013 "is_configured": false, 00:20:10.013 "data_offset": 0, 00:20:10.013 "data_size": 0 00:20:10.013 }, 00:20:10.013 { 00:20:10.013 "name": "BaseBdev4", 00:20:10.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.013 "is_configured": false, 00:20:10.013 "data_offset": 0, 00:20:10.013 "data_size": 0 00:20:10.013 } 00:20:10.013 ] 00:20:10.013 }' 00:20:10.013 13:04:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:10.013 13:04:14 -- common/autotest_common.sh@10 -- # set +x 00:20:10.945 13:04:14 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:10.945 [2024-04-17 13:04:14.965440] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:10.945 [2024-04-17 13:04:14.965716] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:20:10.945 13:04:14 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:11.203 [2024-04-17 13:04:15.193534] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:11.203 [2024-04-17 13:04:15.193807] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:11.203 [2024-04-17 13:04:15.193929] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:11.203 [2024-04-17 13:04:15.193996] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:11.203 [2024-04-17 13:04:15.194223] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:11.203 [2024-04-17 13:04:15.194307] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:11.203 [2024-04-17 13:04:15.194338] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:11.203 [2024-04-17 13:04:15.194466] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:11.203 13:04:15 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:11.460 [2024-04-17 13:04:15.460790] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:11.460 BaseBdev1 00:20:11.460 13:04:15 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:20:11.460 13:04:15 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:20:11.460 13:04:15 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:20:11.460 13:04:15 -- common/autotest_common.sh@887 -- # local i 00:20:11.460 13:04:15 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:20:11.460 13:04:15 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:20:11.460 13:04:15 -- common/autotest_common.sh@890 -- # 
00:20:11.460 13:04:15 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:20:11.718 13:04:15 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:20:11.975 [
00:20:11.975 {
00:20:11.975 "name": "BaseBdev1",
00:20:11.975 "aliases": [
00:20:11.975 "587a885e-53f2-4e76-ac6f-45cf8ad60615"
00:20:11.975 ],
00:20:11.975 "product_name": "Malloc disk",
00:20:11.975 "block_size": 512,
00:20:11.975 "num_blocks": 65536,
00:20:11.975 "uuid": "587a885e-53f2-4e76-ac6f-45cf8ad60615",
00:20:11.975 "assigned_rate_limits": {
00:20:11.975 "rw_ios_per_sec": 0,
00:20:11.975 "rw_mbytes_per_sec": 0,
00:20:11.975 "r_mbytes_per_sec": 0,
00:20:11.975 "w_mbytes_per_sec": 0
00:20:11.975 },
00:20:11.975 "claimed": true,
00:20:11.975 "claim_type": "exclusive_write",
00:20:11.975 "zoned": false,
00:20:11.975 "supported_io_types": {
00:20:11.975 "read": true,
00:20:11.975 "write": true,
00:20:11.975 "unmap": true,
00:20:11.975 "write_zeroes": true,
00:20:11.975 "flush": true,
00:20:11.975 "reset": true,
00:20:11.975 "compare": false,
00:20:11.975 "compare_and_write": false,
00:20:11.975 "abort": true,
00:20:11.975 "nvme_admin": false,
00:20:11.975 "nvme_io": false
00:20:11.975 },
00:20:11.975 "memory_domains": [
00:20:11.975 {
00:20:11.975 "dma_device_id": "system",
00:20:11.975 "dma_device_type": 1
00:20:11.975 },
00:20:11.975 {
00:20:11.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:11.975 "dma_device_type": 2
00:20:11.975 }
00:20:11.975 ],
00:20:11.975 "driver_specific": {}
00:20:11.975 }
00:20:11.975 ]
00:20:11.975 13:04:15 -- common/autotest_common.sh@893 -- # return 0
00:20:11.975 13:04:15 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:20:11.975 13:04:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:20:11.975 13:04:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:20:11.975 13:04:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:20:11.975 13:04:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:20:11.975 13:04:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:20:11.975 13:04:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:20:11.975 13:04:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:20:11.975 13:04:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:20:11.975 13:04:15 -- bdev/bdev_raid.sh@125 -- # local tmp
00:20:11.975 13:04:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:11.975 13:04:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:20:12.233 13:04:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:20:12.233 "name": "Existed_Raid",
00:20:12.233 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:12.233 "strip_size_kb": 64,
00:20:12.233 "state": "configuring",
00:20:12.233 "raid_level": "raid0",
00:20:12.233 "superblock": false,
00:20:12.233 "num_base_bdevs": 4,
00:20:12.233 "num_base_bdevs_discovered": 1,
00:20:12.233 "num_base_bdevs_operational": 4,
00:20:12.233 "base_bdevs_list": [
00:20:12.233 {
00:20:12.233 "name": "BaseBdev1",
00:20:12.233 "uuid": "587a885e-53f2-4e76-ac6f-45cf8ad60615",
00:20:12.233 "is_configured": true,
00:20:12.233 "data_offset": 0,
00:20:12.233 "data_size": 65536
00:20:12.233 },
00:20:12.233 {
00:20:12.233 "name": "BaseBdev2",
00:20:12.233 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:12.233 "is_configured": false, 00:20:12.233 "data_offset": 0, 00:20:12.233 "data_size": 0 00:20:12.233 }, 00:20:12.233 { 00:20:12.233 "name": "BaseBdev3", 00:20:12.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.233 "is_configured": false, 00:20:12.233 "data_offset": 0, 00:20:12.233 "data_size": 0 00:20:12.233 }, 00:20:12.233 { 00:20:12.233 "name": "BaseBdev4", 00:20:12.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.233 "is_configured": false, 00:20:12.233 "data_offset": 0, 00:20:12.233 "data_size": 0 00:20:12.233 } 00:20:12.233 ] 00:20:12.233 }' 00:20:12.233 13:04:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:12.233 13:04:16 -- common/autotest_common.sh@10 -- # set +x 00:20:12.798 13:04:16 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:13.057 [2024-04-17 13:04:17.125206] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:13.057 [2024-04-17 13:04:17.125462] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:20:13.057 13:04:17 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:20:13.057 13:04:17 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:13.314 [2024-04-17 13:04:17.349334] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:13.314 [2024-04-17 13:04:17.351607] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:13.314 [2024-04-17 13:04:17.351829] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:13.314 [2024-04-17 13:04:17.351964] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:13.314 [2024-04-17 13:04:17.352033] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:13.314 [2024-04-17 13:04:17.352260] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:13.314 [2024-04-17 13:04:17.352321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:13.314 13:04:17 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:20:13.314 13:04:17 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:13.314 13:04:17 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:13.314 13:04:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:13.314 13:04:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:13.314 13:04:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:13.314 13:04:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:13.314 13:04:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:13.314 13:04:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:13.314 13:04:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:13.314 13:04:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:13.314 13:04:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:13.314 13:04:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:13.314 13:04:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:13.573 
13:04:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:13.573 "name": "Existed_Raid", 00:20:13.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.573 "strip_size_kb": 64, 00:20:13.573 "state": "configuring", 00:20:13.573 "raid_level": "raid0", 00:20:13.573 "superblock": false, 00:20:13.573 "num_base_bdevs": 4, 00:20:13.573 "num_base_bdevs_discovered": 1, 00:20:13.573 "num_base_bdevs_operational": 4, 00:20:13.573 "base_bdevs_list": [ 00:20:13.573 { 00:20:13.573 "name": "BaseBdev1", 00:20:13.573 "uuid": "587a885e-53f2-4e76-ac6f-45cf8ad60615", 00:20:13.573 "is_configured": true, 00:20:13.573 "data_offset": 0, 00:20:13.573 "data_size": 65536 00:20:13.573 }, 00:20:13.573 { 00:20:13.573 "name": "BaseBdev2", 00:20:13.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.573 "is_configured": false, 00:20:13.573 "data_offset": 0, 00:20:13.573 "data_size": 0 00:20:13.573 }, 00:20:13.573 { 00:20:13.573 "name": "BaseBdev3", 00:20:13.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.573 "is_configured": false, 00:20:13.573 "data_offset": 0, 00:20:13.573 "data_size": 0 00:20:13.573 }, 00:20:13.573 { 00:20:13.573 "name": "BaseBdev4", 00:20:13.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.573 "is_configured": false, 00:20:13.573 "data_offset": 0, 00:20:13.573 "data_size": 0 00:20:13.573 } 00:20:13.573 ] 00:20:13.573 }' 00:20:13.573 13:04:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:13.573 13:04:17 -- common/autotest_common.sh@10 -- # set +x 00:20:14.507 13:04:18 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:14.507 [2024-04-17 13:04:18.632377] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:14.507 BaseBdev2 00:20:14.507 13:04:18 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:20:14.507 13:04:18 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:20:14.507 13:04:18 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:20:14.507 13:04:18 -- common/autotest_common.sh@887 -- # local i 00:20:14.507 13:04:18 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:20:14.507 13:04:18 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:20:14.507 13:04:18 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:14.765 13:04:18 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:15.023 [ 00:20:15.023 { 00:20:15.023 "name": "BaseBdev2", 00:20:15.023 "aliases": [ 00:20:15.023 "ce030e86-2832-4541-952e-d1a5d2309afa" 00:20:15.023 ], 00:20:15.023 "product_name": "Malloc disk", 00:20:15.023 "block_size": 512, 00:20:15.023 "num_blocks": 65536, 00:20:15.023 "uuid": "ce030e86-2832-4541-952e-d1a5d2309afa", 00:20:15.023 "assigned_rate_limits": { 00:20:15.023 "rw_ios_per_sec": 0, 00:20:15.023 "rw_mbytes_per_sec": 0, 00:20:15.023 "r_mbytes_per_sec": 0, 00:20:15.023 "w_mbytes_per_sec": 0 00:20:15.023 }, 00:20:15.023 "claimed": true, 00:20:15.023 "claim_type": "exclusive_write", 00:20:15.023 "zoned": false, 00:20:15.023 "supported_io_types": { 00:20:15.023 "read": true, 00:20:15.023 "write": true, 00:20:15.023 "unmap": true, 00:20:15.023 "write_zeroes": true, 00:20:15.023 "flush": true, 00:20:15.023 "reset": true, 00:20:15.023 "compare": false, 00:20:15.023 "compare_and_write": false, 00:20:15.023 "abort": true, 00:20:15.023 
"nvme_admin": false, 00:20:15.023 "nvme_io": false 00:20:15.023 }, 00:20:15.023 "memory_domains": [ 00:20:15.023 { 00:20:15.023 "dma_device_id": "system", 00:20:15.023 "dma_device_type": 1 00:20:15.023 }, 00:20:15.023 { 00:20:15.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:15.023 "dma_device_type": 2 00:20:15.023 } 00:20:15.023 ], 00:20:15.023 "driver_specific": {} 00:20:15.023 } 00:20:15.023 ] 00:20:15.023 13:04:19 -- common/autotest_common.sh@893 -- # return 0 00:20:15.023 13:04:19 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:15.023 13:04:19 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:15.023 13:04:19 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:15.023 13:04:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:15.023 13:04:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:15.023 13:04:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:15.023 13:04:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:15.023 13:04:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:15.023 13:04:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:15.023 13:04:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:15.023 13:04:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:15.023 13:04:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:15.023 13:04:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:15.023 13:04:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:15.282 13:04:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:15.282 "name": "Existed_Raid", 00:20:15.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.282 "strip_size_kb": 64, 00:20:15.282 "state": "configuring", 00:20:15.282 "raid_level": "raid0", 00:20:15.282 "superblock": false, 00:20:15.282 "num_base_bdevs": 4, 00:20:15.282 "num_base_bdevs_discovered": 2, 00:20:15.282 "num_base_bdevs_operational": 4, 00:20:15.282 "base_bdevs_list": [ 00:20:15.282 { 00:20:15.282 "name": "BaseBdev1", 00:20:15.282 "uuid": "587a885e-53f2-4e76-ac6f-45cf8ad60615", 00:20:15.282 "is_configured": true, 00:20:15.282 "data_offset": 0, 00:20:15.282 "data_size": 65536 00:20:15.282 }, 00:20:15.282 { 00:20:15.282 "name": "BaseBdev2", 00:20:15.282 "uuid": "ce030e86-2832-4541-952e-d1a5d2309afa", 00:20:15.282 "is_configured": true, 00:20:15.282 "data_offset": 0, 00:20:15.282 "data_size": 65536 00:20:15.282 }, 00:20:15.282 { 00:20:15.282 "name": "BaseBdev3", 00:20:15.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.282 "is_configured": false, 00:20:15.282 "data_offset": 0, 00:20:15.282 "data_size": 0 00:20:15.282 }, 00:20:15.282 { 00:20:15.282 "name": "BaseBdev4", 00:20:15.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.282 "is_configured": false, 00:20:15.282 "data_offset": 0, 00:20:15.282 "data_size": 0 00:20:15.282 } 00:20:15.282 ] 00:20:15.282 }' 00:20:15.282 13:04:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:15.282 13:04:19 -- common/autotest_common.sh@10 -- # set +x 00:20:16.216 13:04:20 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:16.216 [2024-04-17 13:04:20.321749] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:16.216 BaseBdev3 00:20:16.216 13:04:20 -- bdev/bdev_raid.sh@257 -- # 
waitforbdev BaseBdev3 00:20:16.216 13:04:20 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:20:16.216 13:04:20 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:20:16.216 13:04:20 -- common/autotest_common.sh@887 -- # local i 00:20:16.216 13:04:20 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:20:16.216 13:04:20 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:20:16.216 13:04:20 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:16.474 13:04:20 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:16.733 [ 00:20:16.733 { 00:20:16.733 "name": "BaseBdev3", 00:20:16.733 "aliases": [ 00:20:16.733 "f94502c1-e7a1-400e-9273-4b70b61dd3ac" 00:20:16.733 ], 00:20:16.733 "product_name": "Malloc disk", 00:20:16.733 "block_size": 512, 00:20:16.733 "num_blocks": 65536, 00:20:16.733 "uuid": "f94502c1-e7a1-400e-9273-4b70b61dd3ac", 00:20:16.733 "assigned_rate_limits": { 00:20:16.733 "rw_ios_per_sec": 0, 00:20:16.733 "rw_mbytes_per_sec": 0, 00:20:16.733 "r_mbytes_per_sec": 0, 00:20:16.733 "w_mbytes_per_sec": 0 00:20:16.733 }, 00:20:16.733 "claimed": true, 00:20:16.733 "claim_type": "exclusive_write", 00:20:16.733 "zoned": false, 00:20:16.733 "supported_io_types": { 00:20:16.733 "read": true, 00:20:16.733 "write": true, 00:20:16.733 "unmap": true, 00:20:16.733 "write_zeroes": true, 00:20:16.733 "flush": true, 00:20:16.733 "reset": true, 00:20:16.733 "compare": false, 00:20:16.733 "compare_and_write": false, 00:20:16.733 "abort": true, 00:20:16.733 "nvme_admin": false, 00:20:16.733 "nvme_io": false 00:20:16.733 }, 00:20:16.733 "memory_domains": [ 00:20:16.733 { 00:20:16.733 "dma_device_id": "system", 00:20:16.733 "dma_device_type": 1 00:20:16.733 }, 00:20:16.733 { 00:20:16.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:16.733 "dma_device_type": 2 00:20:16.733 } 00:20:16.733 ], 00:20:16.733 "driver_specific": {} 00:20:16.733 } 00:20:16.733 ] 00:20:16.733 13:04:20 -- common/autotest_common.sh@893 -- # return 0 00:20:16.733 13:04:20 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:16.733 13:04:20 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:16.733 13:04:20 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:16.733 13:04:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:16.733 13:04:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:16.733 13:04:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:16.733 13:04:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:16.733 13:04:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:16.733 13:04:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:16.733 13:04:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:16.733 13:04:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:16.733 13:04:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:16.733 13:04:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.733 13:04:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:16.992 13:04:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:16.992 "name": "Existed_Raid", 00:20:16.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.992 "strip_size_kb": 64, 
00:20:16.992 "state": "configuring", 00:20:16.992 "raid_level": "raid0", 00:20:16.992 "superblock": false, 00:20:16.992 "num_base_bdevs": 4, 00:20:16.992 "num_base_bdevs_discovered": 3, 00:20:16.992 "num_base_bdevs_operational": 4, 00:20:16.993 "base_bdevs_list": [ 00:20:16.993 { 00:20:16.993 "name": "BaseBdev1", 00:20:16.993 "uuid": "587a885e-53f2-4e76-ac6f-45cf8ad60615", 00:20:16.993 "is_configured": true, 00:20:16.993 "data_offset": 0, 00:20:16.993 "data_size": 65536 00:20:16.993 }, 00:20:16.993 { 00:20:16.993 "name": "BaseBdev2", 00:20:16.993 "uuid": "ce030e86-2832-4541-952e-d1a5d2309afa", 00:20:16.993 "is_configured": true, 00:20:16.993 "data_offset": 0, 00:20:16.993 "data_size": 65536 00:20:16.993 }, 00:20:16.993 { 00:20:16.993 "name": "BaseBdev3", 00:20:16.993 "uuid": "f94502c1-e7a1-400e-9273-4b70b61dd3ac", 00:20:16.993 "is_configured": true, 00:20:16.993 "data_offset": 0, 00:20:16.993 "data_size": 65536 00:20:16.993 }, 00:20:16.993 { 00:20:16.993 "name": "BaseBdev4", 00:20:16.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.993 "is_configured": false, 00:20:16.993 "data_offset": 0, 00:20:16.993 "data_size": 0 00:20:16.993 } 00:20:16.993 ] 00:20:16.993 }' 00:20:16.993 13:04:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:16.993 13:04:21 -- common/autotest_common.sh@10 -- # set +x 00:20:17.927 13:04:21 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:17.928 [2024-04-17 13:04:22.008032] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:17.928 [2024-04-17 13:04:22.008278] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:20:17.928 [2024-04-17 13:04:22.008322] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:20:17.928 [2024-04-17 13:04:22.008562] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:20:17.928 [2024-04-17 13:04:22.009072] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:20:17.928 [2024-04-17 13:04:22.009192] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:20:17.928 [2024-04-17 13:04:22.009562] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:17.928 BaseBdev4 00:20:17.928 13:04:22 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:20:17.928 13:04:22 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:20:17.928 13:04:22 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:20:17.928 13:04:22 -- common/autotest_common.sh@887 -- # local i 00:20:17.928 13:04:22 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:20:17.928 13:04:22 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:20:17.928 13:04:22 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:18.186 13:04:22 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:18.444 [ 00:20:18.444 { 00:20:18.444 "name": "BaseBdev4", 00:20:18.444 "aliases": [ 00:20:18.444 "619c8933-fb03-44e5-968b-cbc2e7fc987a" 00:20:18.444 ], 00:20:18.444 "product_name": "Malloc disk", 00:20:18.444 "block_size": 512, 00:20:18.444 "num_blocks": 65536, 00:20:18.444 "uuid": "619c8933-fb03-44e5-968b-cbc2e7fc987a", 00:20:18.444 
"assigned_rate_limits": { 00:20:18.444 "rw_ios_per_sec": 0, 00:20:18.444 "rw_mbytes_per_sec": 0, 00:20:18.444 "r_mbytes_per_sec": 0, 00:20:18.444 "w_mbytes_per_sec": 0 00:20:18.444 }, 00:20:18.444 "claimed": true, 00:20:18.445 "claim_type": "exclusive_write", 00:20:18.445 "zoned": false, 00:20:18.445 "supported_io_types": { 00:20:18.445 "read": true, 00:20:18.445 "write": true, 00:20:18.445 "unmap": true, 00:20:18.445 "write_zeroes": true, 00:20:18.445 "flush": true, 00:20:18.445 "reset": true, 00:20:18.445 "compare": false, 00:20:18.445 "compare_and_write": false, 00:20:18.445 "abort": true, 00:20:18.445 "nvme_admin": false, 00:20:18.445 "nvme_io": false 00:20:18.445 }, 00:20:18.445 "memory_domains": [ 00:20:18.445 { 00:20:18.445 "dma_device_id": "system", 00:20:18.445 "dma_device_type": 1 00:20:18.445 }, 00:20:18.445 { 00:20:18.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:18.445 "dma_device_type": 2 00:20:18.445 } 00:20:18.445 ], 00:20:18.445 "driver_specific": {} 00:20:18.445 } 00:20:18.445 ] 00:20:18.445 13:04:22 -- common/autotest_common.sh@893 -- # return 0 00:20:18.445 13:04:22 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:18.445 13:04:22 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:18.445 13:04:22 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:20:18.445 13:04:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:18.445 13:04:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:18.445 13:04:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:18.445 13:04:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:18.445 13:04:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:18.445 13:04:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:18.445 13:04:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:18.445 13:04:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:18.445 13:04:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:18.445 13:04:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:18.445 13:04:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:18.703 13:04:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:18.703 "name": "Existed_Raid", 00:20:18.703 "uuid": "58e5825a-82d0-4103-bd0f-5ee473dee13d", 00:20:18.703 "strip_size_kb": 64, 00:20:18.703 "state": "online", 00:20:18.703 "raid_level": "raid0", 00:20:18.703 "superblock": false, 00:20:18.703 "num_base_bdevs": 4, 00:20:18.703 "num_base_bdevs_discovered": 4, 00:20:18.703 "num_base_bdevs_operational": 4, 00:20:18.703 "base_bdevs_list": [ 00:20:18.703 { 00:20:18.703 "name": "BaseBdev1", 00:20:18.703 "uuid": "587a885e-53f2-4e76-ac6f-45cf8ad60615", 00:20:18.703 "is_configured": true, 00:20:18.703 "data_offset": 0, 00:20:18.703 "data_size": 65536 00:20:18.703 }, 00:20:18.703 { 00:20:18.703 "name": "BaseBdev2", 00:20:18.703 "uuid": "ce030e86-2832-4541-952e-d1a5d2309afa", 00:20:18.703 "is_configured": true, 00:20:18.703 "data_offset": 0, 00:20:18.703 "data_size": 65536 00:20:18.703 }, 00:20:18.703 { 00:20:18.703 "name": "BaseBdev3", 00:20:18.703 "uuid": "f94502c1-e7a1-400e-9273-4b70b61dd3ac", 00:20:18.703 "is_configured": true, 00:20:18.703 "data_offset": 0, 00:20:18.703 "data_size": 65536 00:20:18.703 }, 00:20:18.703 { 00:20:18.703 "name": "BaseBdev4", 00:20:18.703 "uuid": "619c8933-fb03-44e5-968b-cbc2e7fc987a", 00:20:18.703 "is_configured": true, 
00:20:18.703 "data_offset": 0, 00:20:18.703 "data_size": 65536 00:20:18.703 } 00:20:18.703 ] 00:20:18.703 }' 00:20:18.703 13:04:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:18.703 13:04:22 -- common/autotest_common.sh@10 -- # set +x 00:20:19.659 13:04:23 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:19.659 [2024-04-17 13:04:23.648627] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:19.659 [2024-04-17 13:04:23.648853] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:19.659 [2024-04-17 13:04:23.649044] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:19.659 13:04:23 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:20:19.659 13:04:23 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:20:19.659 13:04:23 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:20:19.659 13:04:23 -- bdev/bdev_raid.sh@197 -- # return 1 00:20:19.660 13:04:23 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:20:19.660 13:04:23 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:20:19.660 13:04:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:19.660 13:04:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:20:19.660 13:04:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:19.660 13:04:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:19.660 13:04:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:19.660 13:04:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:19.660 13:04:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:19.660 13:04:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:19.660 13:04:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:19.660 13:04:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:19.660 13:04:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:19.926 13:04:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:19.927 "name": "Existed_Raid", 00:20:19.927 "uuid": "58e5825a-82d0-4103-bd0f-5ee473dee13d", 00:20:19.927 "strip_size_kb": 64, 00:20:19.927 "state": "offline", 00:20:19.927 "raid_level": "raid0", 00:20:19.927 "superblock": false, 00:20:19.927 "num_base_bdevs": 4, 00:20:19.927 "num_base_bdevs_discovered": 3, 00:20:19.927 "num_base_bdevs_operational": 3, 00:20:19.927 "base_bdevs_list": [ 00:20:19.927 { 00:20:19.927 "name": null, 00:20:19.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:19.927 "is_configured": false, 00:20:19.927 "data_offset": 0, 00:20:19.927 "data_size": 65536 00:20:19.927 }, 00:20:19.927 { 00:20:19.927 "name": "BaseBdev2", 00:20:19.927 "uuid": "ce030e86-2832-4541-952e-d1a5d2309afa", 00:20:19.927 "is_configured": true, 00:20:19.927 "data_offset": 0, 00:20:19.927 "data_size": 65536 00:20:19.927 }, 00:20:19.927 { 00:20:19.927 "name": "BaseBdev3", 00:20:19.927 "uuid": "f94502c1-e7a1-400e-9273-4b70b61dd3ac", 00:20:19.927 "is_configured": true, 00:20:19.927 "data_offset": 0, 00:20:19.927 "data_size": 65536 00:20:19.927 }, 00:20:19.927 { 00:20:19.927 "name": "BaseBdev4", 00:20:19.927 "uuid": "619c8933-fb03-44e5-968b-cbc2e7fc987a", 00:20:19.927 "is_configured": true, 00:20:19.927 "data_offset": 0, 00:20:19.927 "data_size": 65536 00:20:19.927 } 00:20:19.927 ] 00:20:19.927 }' 00:20:19.927 13:04:24 -- 
bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:19.927 13:04:24 -- common/autotest_common.sh@10 -- # set +x 00:20:20.874 13:04:24 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:20:20.874 13:04:24 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:20.874 13:04:24 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:20.874 13:04:24 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:20.874 13:04:24 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:20.874 13:04:24 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:20.874 13:04:24 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:21.132 [2024-04-17 13:04:25.113099] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:21.132 13:04:25 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:21.132 13:04:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:21.132 13:04:25 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:21.132 13:04:25 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:21.391 13:04:25 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:21.391 13:04:25 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:21.391 13:04:25 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:21.648 [2024-04-17 13:04:25.688355] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:21.648 13:04:25 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:21.648 13:04:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:21.648 13:04:25 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:21.648 13:04:25 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:21.906 13:04:26 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:21.906 13:04:26 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:21.906 13:04:26 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:20:22.164 [2024-04-17 13:04:26.294796] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:20:22.164 [2024-04-17 13:04:26.295110] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:20:22.422 13:04:26 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:22.422 13:04:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:22.422 13:04:26 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:20:22.422 13:04:26 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:22.680 13:04:26 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:20:22.680 13:04:26 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:20:22.680 13:04:26 -- bdev/bdev_raid.sh@287 -- # killprocess 126139 00:20:22.680 13:04:26 -- common/autotest_common.sh@924 -- # '[' -z 126139 ']' 00:20:22.680 13:04:26 -- common/autotest_common.sh@928 -- # kill -0 126139 00:20:22.680 13:04:26 -- common/autotest_common.sh@929 -- # uname 00:20:22.680 13:04:26 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:20:22.680 13:04:26 -- common/autotest_common.sh@930 -- # ps 
--no-headers -o comm= 126139 00:20:22.680 killing process with pid 126139 00:20:22.680 13:04:26 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:20:22.680 13:04:26 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:20:22.680 13:04:26 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 126139' 00:20:22.680 13:04:26 -- common/autotest_common.sh@943 -- # kill 126139 00:20:22.680 13:04:26 -- common/autotest_common.sh@948 -- # wait 126139 00:20:22.680 [2024-04-17 13:04:26.639188] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:22.680 [2024-04-17 13:04:26.639363] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:23.621 ************************************ 00:20:23.621 END TEST raid_state_function_test 00:20:23.621 ************************************ 00:20:23.621 13:04:27 -- bdev/bdev_raid.sh@289 -- # return 0 00:20:23.621 00:20:23.621 real 0m15.099s 00:20:23.621 user 0m27.073s 00:20:23.621 sys 0m1.653s 00:20:23.621 13:04:27 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:20:23.621 13:04:27 -- common/autotest_common.sh@10 -- # set +x 00:20:23.879 13:04:27 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:20:23.879 13:04:27 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:20:23.879 13:04:27 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:20:23.879 13:04:27 -- common/autotest_common.sh@10 -- # set +x 00:20:23.879 ************************************ 00:20:23.879 START TEST raid_state_function_test_sb 00:20:23.879 ************************************ 00:20:23.879 13:04:27 -- common/autotest_common.sh@1099 -- # raid_state_function_test raid0 4 true 00:20:23.879 13:04:27 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:20:23.879 13:04:27 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:20:23.879 13:04:27 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:20:23.879 13:04:27 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:20:23.879 13:04:27 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:20:23.879 13:04:27 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:20:23.879 13:04:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:23.879 13:04:27 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:20:23.879 13:04:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:23.879 13:04:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:23.879 13:04:27 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:20:23.879 13:04:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:23.879 13:04:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:23.879 13:04:27 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:20:23.879 13:04:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:23.879 13:04:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:23.879 13:04:27 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:20:23.879 13:04:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:23.879 13:04:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:23.879 13:04:27 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:20:23.879 13:04:27 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:20:23.879 13:04:27 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:20:23.879 13:04:27 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:20:23.879 13:04:27 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:20:23.879 13:04:27 -- 
bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:20:23.879 13:04:27 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:20:23.879 13:04:27 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:20:23.879 13:04:27 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:20:23.879 13:04:27 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:20:23.879 13:04:27 -- bdev/bdev_raid.sh@226 -- # raid_pid=126614 00:20:23.879 13:04:27 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:23.879 13:04:27 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 126614' 00:20:23.879 Process raid pid: 126614 00:20:23.879 13:04:27 -- bdev/bdev_raid.sh@228 -- # waitforlisten 126614 /var/tmp/spdk-raid.sock 00:20:23.879 13:04:27 -- common/autotest_common.sh@817 -- # '[' -z 126614 ']' 00:20:23.879 13:04:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:23.879 13:04:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:23.879 13:04:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:23.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:23.879 13:04:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:23.879 13:04:27 -- common/autotest_common.sh@10 -- # set +x 00:20:23.879 [2024-04-17 13:04:27.908315] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:20:23.879 [2024-04-17 13:04:27.908716] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.136 [2024-04-17 13:04:28.077814] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.394 [2024-04-17 13:04:28.296817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.394 [2024-04-17 13:04:28.498196] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:24.962 13:04:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:24.962 13:04:28 -- common/autotest_common.sh@850 -- # return 0 00:20:24.962 13:04:28 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:25.220 [2024-04-17 13:04:29.209442] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:25.220 [2024-04-17 13:04:29.210932] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:25.220 [2024-04-17 13:04:29.211065] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:25.220 [2024-04-17 13:04:29.211206] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:25.220 [2024-04-17 13:04:29.211307] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:25.220 [2024-04-17 13:04:29.211443] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:25.220 [2024-04-17 13:04:29.211578] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:25.220 [2024-04-17 13:04:29.211643] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev4 doesn't exist now 00:20:25.220 13:04:29 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:25.220 13:04:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:25.220 13:04:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:25.220 13:04:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:25.220 13:04:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:25.220 13:04:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:25.220 13:04:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:25.220 13:04:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:25.220 13:04:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:25.220 13:04:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:25.220 13:04:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.220 13:04:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:25.478 13:04:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:25.478 "name": "Existed_Raid", 00:20:25.478 "uuid": "314a5e04-02bd-4369-a19b-6fb51b2166e4", 00:20:25.478 "strip_size_kb": 64, 00:20:25.478 "state": "configuring", 00:20:25.478 "raid_level": "raid0", 00:20:25.478 "superblock": true, 00:20:25.478 "num_base_bdevs": 4, 00:20:25.478 "num_base_bdevs_discovered": 0, 00:20:25.478 "num_base_bdevs_operational": 4, 00:20:25.478 "base_bdevs_list": [ 00:20:25.478 { 00:20:25.478 "name": "BaseBdev1", 00:20:25.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.478 "is_configured": false, 00:20:25.478 "data_offset": 0, 00:20:25.478 "data_size": 0 00:20:25.478 }, 00:20:25.478 { 00:20:25.478 "name": "BaseBdev2", 00:20:25.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.478 "is_configured": false, 00:20:25.478 "data_offset": 0, 00:20:25.478 "data_size": 0 00:20:25.478 }, 00:20:25.478 { 00:20:25.478 "name": "BaseBdev3", 00:20:25.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.478 "is_configured": false, 00:20:25.478 "data_offset": 0, 00:20:25.478 "data_size": 0 00:20:25.478 }, 00:20:25.478 { 00:20:25.478 "name": "BaseBdev4", 00:20:25.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.478 "is_configured": false, 00:20:25.478 "data_offset": 0, 00:20:25.478 "data_size": 0 00:20:25.478 } 00:20:25.478 ] 00:20:25.478 }' 00:20:25.478 13:04:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:25.478 13:04:29 -- common/autotest_common.sh@10 -- # set +x 00:20:26.413 13:04:30 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:26.672 [2024-04-17 13:04:30.565591] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:26.672 [2024-04-17 13:04:30.565853] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:20:26.672 13:04:30 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:26.672 [2024-04-17 13:04:30.813719] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:26.672 [2024-04-17 13:04:30.813982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:26.672 [2024-04-17 13:04:30.814088] 
bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:26.672 [2024-04-17 13:04:30.814218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:26.672 [2024-04-17 13:04:30.814317] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:26.672 [2024-04-17 13:04:30.814396] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:26.672 [2024-04-17 13:04:30.814559] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:26.672 [2024-04-17 13:04:30.814622] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:26.931 13:04:30 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:27.190 [2024-04-17 13:04:31.094157] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:27.190 BaseBdev1 00:20:27.190 13:04:31 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:20:27.190 13:04:31 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:20:27.190 13:04:31 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:20:27.190 13:04:31 -- common/autotest_common.sh@887 -- # local i 00:20:27.190 13:04:31 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:20:27.190 13:04:31 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:20:27.190 13:04:31 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:27.190 13:04:31 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:27.448 [ 00:20:27.448 { 00:20:27.448 "name": "BaseBdev1", 00:20:27.448 "aliases": [ 00:20:27.448 "85c22fcd-769f-4f7a-9c93-7801334326f4" 00:20:27.448 ], 00:20:27.448 "product_name": "Malloc disk", 00:20:27.448 "block_size": 512, 00:20:27.448 "num_blocks": 65536, 00:20:27.448 "uuid": "85c22fcd-769f-4f7a-9c93-7801334326f4", 00:20:27.448 "assigned_rate_limits": { 00:20:27.448 "rw_ios_per_sec": 0, 00:20:27.448 "rw_mbytes_per_sec": 0, 00:20:27.448 "r_mbytes_per_sec": 0, 00:20:27.448 "w_mbytes_per_sec": 0 00:20:27.448 }, 00:20:27.448 "claimed": true, 00:20:27.448 "claim_type": "exclusive_write", 00:20:27.448 "zoned": false, 00:20:27.448 "supported_io_types": { 00:20:27.448 "read": true, 00:20:27.448 "write": true, 00:20:27.448 "unmap": true, 00:20:27.448 "write_zeroes": true, 00:20:27.448 "flush": true, 00:20:27.448 "reset": true, 00:20:27.448 "compare": false, 00:20:27.448 "compare_and_write": false, 00:20:27.448 "abort": true, 00:20:27.448 "nvme_admin": false, 00:20:27.448 "nvme_io": false 00:20:27.448 }, 00:20:27.448 "memory_domains": [ 00:20:27.448 { 00:20:27.448 "dma_device_id": "system", 00:20:27.448 "dma_device_type": 1 00:20:27.448 }, 00:20:27.448 { 00:20:27.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:27.448 "dma_device_type": 2 00:20:27.448 } 00:20:27.448 ], 00:20:27.448 "driver_specific": {} 00:20:27.448 } 00:20:27.448 ] 00:20:27.448 13:04:31 -- common/autotest_common.sh@893 -- # return 0 00:20:27.448 13:04:31 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:27.448 13:04:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:27.448 13:04:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 
00:20:27.448 13:04:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:27.448 13:04:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:27.448 13:04:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:27.448 13:04:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:27.448 13:04:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:27.448 13:04:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:27.448 13:04:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:27.448 13:04:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:27.448 13:04:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:28.016 13:04:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:28.016 "name": "Existed_Raid", 00:20:28.016 "uuid": "d6a401d6-849f-4688-860b-b48d65838a5c", 00:20:28.016 "strip_size_kb": 64, 00:20:28.016 "state": "configuring", 00:20:28.016 "raid_level": "raid0", 00:20:28.016 "superblock": true, 00:20:28.016 "num_base_bdevs": 4, 00:20:28.016 "num_base_bdevs_discovered": 1, 00:20:28.016 "num_base_bdevs_operational": 4, 00:20:28.016 "base_bdevs_list": [ 00:20:28.016 { 00:20:28.016 "name": "BaseBdev1", 00:20:28.016 "uuid": "85c22fcd-769f-4f7a-9c93-7801334326f4", 00:20:28.016 "is_configured": true, 00:20:28.016 "data_offset": 2048, 00:20:28.016 "data_size": 63488 00:20:28.016 }, 00:20:28.016 { 00:20:28.016 "name": "BaseBdev2", 00:20:28.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.016 "is_configured": false, 00:20:28.016 "data_offset": 0, 00:20:28.016 "data_size": 0 00:20:28.016 }, 00:20:28.016 { 00:20:28.016 "name": "BaseBdev3", 00:20:28.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.016 "is_configured": false, 00:20:28.016 "data_offset": 0, 00:20:28.016 "data_size": 0 00:20:28.016 }, 00:20:28.016 { 00:20:28.016 "name": "BaseBdev4", 00:20:28.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.016 "is_configured": false, 00:20:28.016 "data_offset": 0, 00:20:28.016 "data_size": 0 00:20:28.016 } 00:20:28.016 ] 00:20:28.016 }' 00:20:28.016 13:04:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:28.016 13:04:31 -- common/autotest_common.sh@10 -- # set +x 00:20:28.583 13:04:32 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:28.841 [2024-04-17 13:04:32.822602] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:28.841 [2024-04-17 13:04:32.822835] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:20:28.841 13:04:32 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:20:28.841 13:04:32 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:29.099 13:04:33 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:29.358 BaseBdev1 00:20:29.358 13:04:33 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:20:29.358 13:04:33 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:20:29.358 13:04:33 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:20:29.358 13:04:33 -- common/autotest_common.sh@887 -- # local i 00:20:29.358 13:04:33 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:20:29.358 13:04:33 -- 
common/autotest_common.sh@888 -- # bdev_timeout=2000 00:20:29.358 13:04:33 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:29.617 13:04:33 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:29.875 [ 00:20:29.875 { 00:20:29.875 "name": "BaseBdev1", 00:20:29.875 "aliases": [ 00:20:29.875 "ea7af7bc-707d-44f3-a91c-5930462030a5" 00:20:29.875 ], 00:20:29.875 "product_name": "Malloc disk", 00:20:29.875 "block_size": 512, 00:20:29.875 "num_blocks": 65536, 00:20:29.875 "uuid": "ea7af7bc-707d-44f3-a91c-5930462030a5", 00:20:29.875 "assigned_rate_limits": { 00:20:29.875 "rw_ios_per_sec": 0, 00:20:29.875 "rw_mbytes_per_sec": 0, 00:20:29.875 "r_mbytes_per_sec": 0, 00:20:29.875 "w_mbytes_per_sec": 0 00:20:29.875 }, 00:20:29.875 "claimed": false, 00:20:29.875 "zoned": false, 00:20:29.875 "supported_io_types": { 00:20:29.875 "read": true, 00:20:29.875 "write": true, 00:20:29.875 "unmap": true, 00:20:29.875 "write_zeroes": true, 00:20:29.875 "flush": true, 00:20:29.875 "reset": true, 00:20:29.875 "compare": false, 00:20:29.875 "compare_and_write": false, 00:20:29.875 "abort": true, 00:20:29.875 "nvme_admin": false, 00:20:29.875 "nvme_io": false 00:20:29.875 }, 00:20:29.875 "memory_domains": [ 00:20:29.875 { 00:20:29.875 "dma_device_id": "system", 00:20:29.875 "dma_device_type": 1 00:20:29.875 }, 00:20:29.875 { 00:20:29.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:29.875 "dma_device_type": 2 00:20:29.875 } 00:20:29.875 ], 00:20:29.875 "driver_specific": {} 00:20:29.875 } 00:20:29.875 ] 00:20:29.875 13:04:33 -- common/autotest_common.sh@893 -- # return 0 00:20:29.875 13:04:33 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:30.134 [2024-04-17 13:04:34.180785] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:30.134 [2024-04-17 13:04:34.183061] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:30.134 [2024-04-17 13:04:34.183256] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:30.134 [2024-04-17 13:04:34.183377] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:30.134 [2024-04-17 13:04:34.183444] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:30.134 [2024-04-17 13:04:34.183555] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:30.134 [2024-04-17 13:04:34.183613] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:30.134 13:04:34 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:20:30.134 13:04:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:30.134 13:04:34 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:30.134 13:04:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:30.134 13:04:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:30.134 13:04:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:30.134 13:04:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:30.134 13:04:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 
00:20:30.134 13:04:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:30.134 13:04:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:30.134 13:04:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:30.134 13:04:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:30.134 13:04:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.134 13:04:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:30.392 13:04:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:30.392 "name": "Existed_Raid", 00:20:30.392 "uuid": "fcb6f445-a7ac-409b-9e97-4b668269bb54", 00:20:30.392 "strip_size_kb": 64, 00:20:30.392 "state": "configuring", 00:20:30.392 "raid_level": "raid0", 00:20:30.392 "superblock": true, 00:20:30.392 "num_base_bdevs": 4, 00:20:30.392 "num_base_bdevs_discovered": 1, 00:20:30.392 "num_base_bdevs_operational": 4, 00:20:30.392 "base_bdevs_list": [ 00:20:30.392 { 00:20:30.392 "name": "BaseBdev1", 00:20:30.392 "uuid": "ea7af7bc-707d-44f3-a91c-5930462030a5", 00:20:30.392 "is_configured": true, 00:20:30.392 "data_offset": 2048, 00:20:30.392 "data_size": 63488 00:20:30.392 }, 00:20:30.392 { 00:20:30.392 "name": "BaseBdev2", 00:20:30.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.392 "is_configured": false, 00:20:30.392 "data_offset": 0, 00:20:30.392 "data_size": 0 00:20:30.392 }, 00:20:30.392 { 00:20:30.392 "name": "BaseBdev3", 00:20:30.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.392 "is_configured": false, 00:20:30.392 "data_offset": 0, 00:20:30.392 "data_size": 0 00:20:30.392 }, 00:20:30.392 { 00:20:30.392 "name": "BaseBdev4", 00:20:30.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.392 "is_configured": false, 00:20:30.392 "data_offset": 0, 00:20:30.392 "data_size": 0 00:20:30.392 } 00:20:30.392 ] 00:20:30.392 }' 00:20:30.392 13:04:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:30.392 13:04:34 -- common/autotest_common.sh@10 -- # set +x 00:20:31.328 13:04:35 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:31.328 [2024-04-17 13:04:35.466710] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:31.328 BaseBdev2 00:20:31.587 13:04:35 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:20:31.587 13:04:35 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:20:31.587 13:04:35 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:20:31.587 13:04:35 -- common/autotest_common.sh@887 -- # local i 00:20:31.587 13:04:35 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:20:31.587 13:04:35 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:20:31.587 13:04:35 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:31.845 13:04:35 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:32.104 [ 00:20:32.104 { 00:20:32.104 "name": "BaseBdev2", 00:20:32.104 "aliases": [ 00:20:32.104 "3535a2e1-c344-4b1b-9fa3-79582088bbe3" 00:20:32.104 ], 00:20:32.104 "product_name": "Malloc disk", 00:20:32.104 "block_size": 512, 00:20:32.104 "num_blocks": 65536, 00:20:32.104 "uuid": "3535a2e1-c344-4b1b-9fa3-79582088bbe3", 00:20:32.104 "assigned_rate_limits": { 00:20:32.104 "rw_ios_per_sec": 0, 
00:20:32.104 "rw_mbytes_per_sec": 0, 00:20:32.104 "r_mbytes_per_sec": 0, 00:20:32.104 "w_mbytes_per_sec": 0 00:20:32.104 }, 00:20:32.104 "claimed": true, 00:20:32.104 "claim_type": "exclusive_write", 00:20:32.104 "zoned": false, 00:20:32.104 "supported_io_types": { 00:20:32.104 "read": true, 00:20:32.104 "write": true, 00:20:32.104 "unmap": true, 00:20:32.104 "write_zeroes": true, 00:20:32.104 "flush": true, 00:20:32.104 "reset": true, 00:20:32.104 "compare": false, 00:20:32.104 "compare_and_write": false, 00:20:32.104 "abort": true, 00:20:32.104 "nvme_admin": false, 00:20:32.104 "nvme_io": false 00:20:32.104 }, 00:20:32.104 "memory_domains": [ 00:20:32.104 { 00:20:32.104 "dma_device_id": "system", 00:20:32.104 "dma_device_type": 1 00:20:32.104 }, 00:20:32.104 { 00:20:32.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:32.104 "dma_device_type": 2 00:20:32.104 } 00:20:32.104 ], 00:20:32.104 "driver_specific": {} 00:20:32.104 } 00:20:32.104 ] 00:20:32.104 13:04:36 -- common/autotest_common.sh@893 -- # return 0 00:20:32.104 13:04:36 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:32.104 13:04:36 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:32.104 13:04:36 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:32.104 13:04:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:32.104 13:04:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:32.104 13:04:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:32.104 13:04:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:32.104 13:04:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:32.104 13:04:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:32.104 13:04:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:32.104 13:04:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:32.104 13:04:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:32.104 13:04:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:32.104 13:04:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:32.363 13:04:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:32.363 "name": "Existed_Raid", 00:20:32.363 "uuid": "fcb6f445-a7ac-409b-9e97-4b668269bb54", 00:20:32.363 "strip_size_kb": 64, 00:20:32.363 "state": "configuring", 00:20:32.363 "raid_level": "raid0", 00:20:32.363 "superblock": true, 00:20:32.363 "num_base_bdevs": 4, 00:20:32.363 "num_base_bdevs_discovered": 2, 00:20:32.363 "num_base_bdevs_operational": 4, 00:20:32.363 "base_bdevs_list": [ 00:20:32.363 { 00:20:32.363 "name": "BaseBdev1", 00:20:32.363 "uuid": "ea7af7bc-707d-44f3-a91c-5930462030a5", 00:20:32.363 "is_configured": true, 00:20:32.363 "data_offset": 2048, 00:20:32.363 "data_size": 63488 00:20:32.363 }, 00:20:32.363 { 00:20:32.363 "name": "BaseBdev2", 00:20:32.363 "uuid": "3535a2e1-c344-4b1b-9fa3-79582088bbe3", 00:20:32.363 "is_configured": true, 00:20:32.363 "data_offset": 2048, 00:20:32.363 "data_size": 63488 00:20:32.363 }, 00:20:32.363 { 00:20:32.363 "name": "BaseBdev3", 00:20:32.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.363 "is_configured": false, 00:20:32.363 "data_offset": 0, 00:20:32.363 "data_size": 0 00:20:32.363 }, 00:20:32.363 { 00:20:32.363 "name": "BaseBdev4", 00:20:32.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.363 "is_configured": false, 00:20:32.363 "data_offset": 0, 00:20:32.363 
"data_size": 0 00:20:32.363 } 00:20:32.363 ] 00:20:32.363 }' 00:20:32.363 13:04:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:32.363 13:04:36 -- common/autotest_common.sh@10 -- # set +x 00:20:32.930 13:04:37 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:33.189 [2024-04-17 13:04:37.322124] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:33.189 BaseBdev3 00:20:33.448 13:04:37 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:20:33.448 13:04:37 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:20:33.448 13:04:37 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:20:33.448 13:04:37 -- common/autotest_common.sh@887 -- # local i 00:20:33.448 13:04:37 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:20:33.448 13:04:37 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:20:33.448 13:04:37 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:33.706 13:04:37 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:33.706 [ 00:20:33.706 { 00:20:33.706 "name": "BaseBdev3", 00:20:33.706 "aliases": [ 00:20:33.706 "8d3d07bd-53f3-4e47-a830-fb5158e02d30" 00:20:33.706 ], 00:20:33.706 "product_name": "Malloc disk", 00:20:33.706 "block_size": 512, 00:20:33.706 "num_blocks": 65536, 00:20:33.706 "uuid": "8d3d07bd-53f3-4e47-a830-fb5158e02d30", 00:20:33.706 "assigned_rate_limits": { 00:20:33.706 "rw_ios_per_sec": 0, 00:20:33.706 "rw_mbytes_per_sec": 0, 00:20:33.706 "r_mbytes_per_sec": 0, 00:20:33.706 "w_mbytes_per_sec": 0 00:20:33.706 }, 00:20:33.706 "claimed": true, 00:20:33.706 "claim_type": "exclusive_write", 00:20:33.706 "zoned": false, 00:20:33.706 "supported_io_types": { 00:20:33.706 "read": true, 00:20:33.706 "write": true, 00:20:33.706 "unmap": true, 00:20:33.706 "write_zeroes": true, 00:20:33.706 "flush": true, 00:20:33.706 "reset": true, 00:20:33.706 "compare": false, 00:20:33.706 "compare_and_write": false, 00:20:33.706 "abort": true, 00:20:33.706 "nvme_admin": false, 00:20:33.706 "nvme_io": false 00:20:33.706 }, 00:20:33.706 "memory_domains": [ 00:20:33.706 { 00:20:33.706 "dma_device_id": "system", 00:20:33.706 "dma_device_type": 1 00:20:33.706 }, 00:20:33.706 { 00:20:33.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:33.707 "dma_device_type": 2 00:20:33.707 } 00:20:33.707 ], 00:20:33.707 "driver_specific": {} 00:20:33.707 } 00:20:33.707 ] 00:20:33.707 13:04:37 -- common/autotest_common.sh@893 -- # return 0 00:20:33.707 13:04:37 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:33.707 13:04:37 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:33.707 13:04:37 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:33.707 13:04:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:33.707 13:04:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:33.707 13:04:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:33.707 13:04:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:33.707 13:04:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:33.707 13:04:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:33.707 13:04:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:33.707 13:04:37 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:33.707 13:04:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:33.707 13:04:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:33.707 13:04:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:33.965 13:04:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:33.965 "name": "Existed_Raid", 00:20:33.965 "uuid": "fcb6f445-a7ac-409b-9e97-4b668269bb54", 00:20:33.965 "strip_size_kb": 64, 00:20:33.965 "state": "configuring", 00:20:33.965 "raid_level": "raid0", 00:20:33.965 "superblock": true, 00:20:33.965 "num_base_bdevs": 4, 00:20:33.965 "num_base_bdevs_discovered": 3, 00:20:33.965 "num_base_bdevs_operational": 4, 00:20:33.965 "base_bdevs_list": [ 00:20:33.965 { 00:20:33.965 "name": "BaseBdev1", 00:20:33.965 "uuid": "ea7af7bc-707d-44f3-a91c-5930462030a5", 00:20:33.965 "is_configured": true, 00:20:33.965 "data_offset": 2048, 00:20:33.965 "data_size": 63488 00:20:33.965 }, 00:20:33.965 { 00:20:33.965 "name": "BaseBdev2", 00:20:33.965 "uuid": "3535a2e1-c344-4b1b-9fa3-79582088bbe3", 00:20:33.965 "is_configured": true, 00:20:33.965 "data_offset": 2048, 00:20:33.965 "data_size": 63488 00:20:33.965 }, 00:20:33.965 { 00:20:33.965 "name": "BaseBdev3", 00:20:33.965 "uuid": "8d3d07bd-53f3-4e47-a830-fb5158e02d30", 00:20:33.965 "is_configured": true, 00:20:33.965 "data_offset": 2048, 00:20:33.965 "data_size": 63488 00:20:33.965 }, 00:20:33.965 { 00:20:33.965 "name": "BaseBdev4", 00:20:33.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.965 "is_configured": false, 00:20:33.965 "data_offset": 0, 00:20:33.965 "data_size": 0 00:20:33.965 } 00:20:33.965 ] 00:20:33.965 }' 00:20:33.965 13:04:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:33.965 13:04:38 -- common/autotest_common.sh@10 -- # set +x 00:20:34.921 13:04:38 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:35.180 [2024-04-17 13:04:39.070539] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:35.180 [2024-04-17 13:04:39.070978] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:20:35.180 [2024-04-17 13:04:39.071118] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:35.180 [2024-04-17 13:04:39.071370] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:20:35.180 [2024-04-17 13:04:39.071920] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:20:35.180 [2024-04-17 13:04:39.072055] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:20:35.180 BaseBdev4 00:20:35.180 [2024-04-17 13:04:39.072345] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:35.180 13:04:39 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:20:35.180 13:04:39 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:20:35.180 13:04:39 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:20:35.180 13:04:39 -- common/autotest_common.sh@887 -- # local i 00:20:35.180 13:04:39 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:20:35.180 13:04:39 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:20:35.180 13:04:39 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:35.438 13:04:39 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:35.696 [ 00:20:35.696 { 00:20:35.696 "name": "BaseBdev4", 00:20:35.696 "aliases": [ 00:20:35.696 "188c964c-c728-4e80-8bab-2b9a0f382f7a" 00:20:35.696 ], 00:20:35.696 "product_name": "Malloc disk", 00:20:35.696 "block_size": 512, 00:20:35.696 "num_blocks": 65536, 00:20:35.696 "uuid": "188c964c-c728-4e80-8bab-2b9a0f382f7a", 00:20:35.696 "assigned_rate_limits": { 00:20:35.696 "rw_ios_per_sec": 0, 00:20:35.696 "rw_mbytes_per_sec": 0, 00:20:35.696 "r_mbytes_per_sec": 0, 00:20:35.696 "w_mbytes_per_sec": 0 00:20:35.696 }, 00:20:35.696 "claimed": true, 00:20:35.696 "claim_type": "exclusive_write", 00:20:35.696 "zoned": false, 00:20:35.696 "supported_io_types": { 00:20:35.696 "read": true, 00:20:35.696 "write": true, 00:20:35.696 "unmap": true, 00:20:35.696 "write_zeroes": true, 00:20:35.696 "flush": true, 00:20:35.696 "reset": true, 00:20:35.696 "compare": false, 00:20:35.696 "compare_and_write": false, 00:20:35.696 "abort": true, 00:20:35.696 "nvme_admin": false, 00:20:35.696 "nvme_io": false 00:20:35.696 }, 00:20:35.696 "memory_domains": [ 00:20:35.696 { 00:20:35.696 "dma_device_id": "system", 00:20:35.696 "dma_device_type": 1 00:20:35.696 }, 00:20:35.696 { 00:20:35.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:35.696 "dma_device_type": 2 00:20:35.696 } 00:20:35.696 ], 00:20:35.696 "driver_specific": {} 00:20:35.696 } 00:20:35.696 ] 00:20:35.696 13:04:39 -- common/autotest_common.sh@893 -- # return 0 00:20:35.696 13:04:39 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:35.696 13:04:39 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:35.696 13:04:39 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:20:35.696 13:04:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:35.696 13:04:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:35.696 13:04:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:35.696 13:04:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:35.696 13:04:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:35.696 13:04:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:35.696 13:04:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:35.696 13:04:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:35.696 13:04:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:35.696 13:04:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:35.696 13:04:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:35.954 13:04:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:35.954 "name": "Existed_Raid", 00:20:35.954 "uuid": "fcb6f445-a7ac-409b-9e97-4b668269bb54", 00:20:35.954 "strip_size_kb": 64, 00:20:35.954 "state": "online", 00:20:35.954 "raid_level": "raid0", 00:20:35.954 "superblock": true, 00:20:35.954 "num_base_bdevs": 4, 00:20:35.954 "num_base_bdevs_discovered": 4, 00:20:35.954 "num_base_bdevs_operational": 4, 00:20:35.954 "base_bdevs_list": [ 00:20:35.954 { 00:20:35.954 "name": "BaseBdev1", 00:20:35.954 "uuid": "ea7af7bc-707d-44f3-a91c-5930462030a5", 00:20:35.954 "is_configured": true, 00:20:35.954 "data_offset": 2048, 00:20:35.954 "data_size": 63488 00:20:35.954 }, 00:20:35.954 { 00:20:35.954 "name": 
"BaseBdev2", 00:20:35.954 "uuid": "3535a2e1-c344-4b1b-9fa3-79582088bbe3", 00:20:35.954 "is_configured": true, 00:20:35.954 "data_offset": 2048, 00:20:35.954 "data_size": 63488 00:20:35.954 }, 00:20:35.954 { 00:20:35.954 "name": "BaseBdev3", 00:20:35.954 "uuid": "8d3d07bd-53f3-4e47-a830-fb5158e02d30", 00:20:35.954 "is_configured": true, 00:20:35.954 "data_offset": 2048, 00:20:35.954 "data_size": 63488 00:20:35.954 }, 00:20:35.954 { 00:20:35.954 "name": "BaseBdev4", 00:20:35.954 "uuid": "188c964c-c728-4e80-8bab-2b9a0f382f7a", 00:20:35.954 "is_configured": true, 00:20:35.954 "data_offset": 2048, 00:20:35.954 "data_size": 63488 00:20:35.954 } 00:20:35.954 ] 00:20:35.954 }' 00:20:35.954 13:04:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:35.954 13:04:39 -- common/autotest_common.sh@10 -- # set +x 00:20:36.519 13:04:40 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:36.777 [2024-04-17 13:04:40.863332] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:36.777 [2024-04-17 13:04:40.863561] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:36.777 [2024-04-17 13:04:40.863739] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:37.035 13:04:40 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:20:37.035 13:04:40 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:20:37.035 13:04:40 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:20:37.035 13:04:40 -- bdev/bdev_raid.sh@197 -- # return 1 00:20:37.035 13:04:40 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:20:37.035 13:04:40 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:20:37.035 13:04:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:37.035 13:04:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:20:37.035 13:04:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:37.035 13:04:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:37.035 13:04:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:37.035 13:04:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:37.035 13:04:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:37.035 13:04:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:37.035 13:04:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:37.035 13:04:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:37.035 13:04:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:37.293 13:04:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:37.293 "name": "Existed_Raid", 00:20:37.293 "uuid": "fcb6f445-a7ac-409b-9e97-4b668269bb54", 00:20:37.293 "strip_size_kb": 64, 00:20:37.293 "state": "offline", 00:20:37.293 "raid_level": "raid0", 00:20:37.293 "superblock": true, 00:20:37.293 "num_base_bdevs": 4, 00:20:37.293 "num_base_bdevs_discovered": 3, 00:20:37.293 "num_base_bdevs_operational": 3, 00:20:37.293 "base_bdevs_list": [ 00:20:37.293 { 00:20:37.293 "name": null, 00:20:37.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.293 "is_configured": false, 00:20:37.293 "data_offset": 2048, 00:20:37.293 "data_size": 63488 00:20:37.293 }, 00:20:37.293 { 00:20:37.293 "name": "BaseBdev2", 00:20:37.293 "uuid": "3535a2e1-c344-4b1b-9fa3-79582088bbe3", 00:20:37.293 "is_configured": true, 00:20:37.293 
"data_offset": 2048, 00:20:37.293 "data_size": 63488 00:20:37.293 }, 00:20:37.293 { 00:20:37.293 "name": "BaseBdev3", 00:20:37.293 "uuid": "8d3d07bd-53f3-4e47-a830-fb5158e02d30", 00:20:37.293 "is_configured": true, 00:20:37.293 "data_offset": 2048, 00:20:37.293 "data_size": 63488 00:20:37.293 }, 00:20:37.293 { 00:20:37.293 "name": "BaseBdev4", 00:20:37.293 "uuid": "188c964c-c728-4e80-8bab-2b9a0f382f7a", 00:20:37.293 "is_configured": true, 00:20:37.293 "data_offset": 2048, 00:20:37.293 "data_size": 63488 00:20:37.293 } 00:20:37.293 ] 00:20:37.293 }' 00:20:37.293 13:04:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:37.293 13:04:41 -- common/autotest_common.sh@10 -- # set +x 00:20:37.879 13:04:42 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:20:37.879 13:04:42 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:37.879 13:04:42 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:37.879 13:04:42 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:38.446 13:04:42 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:38.446 13:04:42 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:38.446 13:04:42 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:38.446 [2024-04-17 13:04:42.569709] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:38.704 13:04:42 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:38.704 13:04:42 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:38.704 13:04:42 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:38.704 13:04:42 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:38.962 13:04:42 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:38.962 13:04:42 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:38.962 13:04:42 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:39.220 [2024-04-17 13:04:43.197235] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:39.220 13:04:43 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:39.220 13:04:43 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:39.220 13:04:43 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:39.220 13:04:43 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:39.478 13:04:43 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:39.478 13:04:43 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:39.478 13:04:43 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:20:39.736 [2024-04-17 13:04:43.800348] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:20:39.736 [2024-04-17 13:04:43.800569] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:20:39.994 13:04:43 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:39.994 13:04:43 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:39.994 13:04:43 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:39.994 13:04:43 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] 
| select(.)' 00:20:40.255 13:04:44 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:20:40.255 13:04:44 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:20:40.256 13:04:44 -- bdev/bdev_raid.sh@287 -- # killprocess 126614 00:20:40.256 13:04:44 -- common/autotest_common.sh@924 -- # '[' -z 126614 ']' 00:20:40.256 13:04:44 -- common/autotest_common.sh@928 -- # kill -0 126614 00:20:40.256 13:04:44 -- common/autotest_common.sh@929 -- # uname 00:20:40.256 13:04:44 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:20:40.256 13:04:44 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 126614 00:20:40.256 killing process with pid 126614 00:20:40.256 13:04:44 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:20:40.256 13:04:44 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:20:40.256 13:04:44 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 126614' 00:20:40.256 13:04:44 -- common/autotest_common.sh@943 -- # kill 126614 00:20:40.256 13:04:44 -- common/autotest_common.sh@948 -- # wait 126614 00:20:40.256 [2024-04-17 13:04:44.197165] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:40.256 [2024-04-17 13:04:44.197343] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:41.206 ************************************ 00:20:41.206 END TEST raid_state_function_test_sb 00:20:41.206 ************************************ 00:20:41.206 13:04:45 -- bdev/bdev_raid.sh@289 -- # return 0 00:20:41.206 00:20:41.206 real 0m17.386s 00:20:41.206 user 0m31.418s 00:20:41.206 sys 0m1.830s 00:20:41.206 13:04:45 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:20:41.206 13:04:45 -- common/autotest_common.sh@10 -- # set +x 00:20:41.206 13:04:45 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:20:41.206 13:04:45 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:20:41.206 13:04:45 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:20:41.206 13:04:45 -- common/autotest_common.sh@10 -- # set +x 00:20:41.206 ************************************ 00:20:41.206 START TEST raid_superblock_test 00:20:41.206 ************************************ 00:20:41.206 13:04:45 -- common/autotest_common.sh@1099 -- # raid_superblock_test raid0 4 00:20:41.206 13:04:45 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:20:41.206 13:04:45 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:20:41.206 13:04:45 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:20:41.206 13:04:45 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:20:41.206 13:04:45 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:20:41.206 13:04:45 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:20:41.206 13:04:45 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:20:41.206 13:04:45 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:20:41.207 13:04:45 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:20:41.207 13:04:45 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:20:41.207 13:04:45 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:20:41.207 13:04:45 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:20:41.207 13:04:45 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:20:41.207 13:04:45 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:20:41.207 13:04:45 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:20:41.207 13:04:45 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:20:41.207 13:04:45 -- bdev/bdev_raid.sh@357 -- # raid_pid=127126 
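The raid_pid=127126 captured just above is the backgrounded bdev_svc test app, started with -r /var/tmp/spdk-raid.sock -L bdev_raid; the waitforlisten trace that follows simply blocks until that private RPC socket answers. A minimal sketch of the launch-and-wait pattern, assuming rpc_get_methods serves as the readiness probe (the real waitforlisten helper in autotest_common.sh is more thorough):

    # Sketch only: start bdev_svc on a private RPC socket, then poll until
    # the app responds. Using rpc_get_methods as the probe is an assumption.
    sock=/var/tmp/spdk-raid.sock
    spdk=/home/vagrant/spdk_repo/spdk
    "$spdk/test/app/bdev_svc/bdev_svc" -r "$sock" -L bdev_raid &
    raid_pid=$!
    for ((retry = 0; retry < 100; retry++)); do
        "$spdk/scripts/rpc.py" -s "$sock" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done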
00:20:41.207 13:04:45 -- bdev/bdev_raid.sh@358 -- # waitforlisten 127126 /var/tmp/spdk-raid.sock 00:20:41.207 13:04:45 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:20:41.207 13:04:45 -- common/autotest_common.sh@817 -- # '[' -z 127126 ']' 00:20:41.207 13:04:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:41.207 13:04:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:41.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:41.207 13:04:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:41.207 13:04:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:41.207 13:04:45 -- common/autotest_common.sh@10 -- # set +x 00:20:41.465 [2024-04-17 13:04:45.365230] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:20:41.465 [2024-04-17 13:04:45.365647] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127126 ] 00:20:41.465 [2024-04-17 13:04:45.536347] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.722 [2024-04-17 13:04:45.762844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:41.980 [2024-04-17 13:04:45.958395] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:42.238 13:04:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:42.238 13:04:46 -- common/autotest_common.sh@850 -- # return 0 00:20:42.238 13:04:46 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:20:42.238 13:04:46 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:42.238 13:04:46 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:20:42.238 13:04:46 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:20:42.238 13:04:46 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:42.238 13:04:46 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:42.238 13:04:46 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:42.238 13:04:46 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:42.238 13:04:46 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:20:42.496 malloc1 00:20:42.496 13:04:46 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:42.755 [2024-04-17 13:04:46.739970] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:42.755 [2024-04-17 13:04:46.740461] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:42.755 [2024-04-17 13:04:46.740610] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:20:42.755 [2024-04-17 13:04:46.740761] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:42.755 [2024-04-17 13:04:46.743337] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:42.755 [2024-04-17 13:04:46.743512] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:42.755 
pt1 00:20:42.755 13:04:46 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:42.755 13:04:46 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:42.755 13:04:46 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:20:42.755 13:04:46 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:20:42.755 13:04:46 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:42.755 13:04:46 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:42.755 13:04:46 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:42.755 13:04:46 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:42.755 13:04:46 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:20:43.016 malloc2 00:20:43.016 13:04:47 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:43.274 [2024-04-17 13:04:47.318143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:43.274 [2024-04-17 13:04:47.318488] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:43.274 [2024-04-17 13:04:47.318571] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:20:43.274 [2024-04-17 13:04:47.318859] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:43.274 [2024-04-17 13:04:47.321573] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:43.274 [2024-04-17 13:04:47.321758] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:43.274 pt2 00:20:43.274 13:04:47 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:43.274 13:04:47 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:43.274 13:04:47 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:20:43.274 13:04:47 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:20:43.274 13:04:47 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:43.274 13:04:47 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:43.274 13:04:47 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:43.274 13:04:47 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:43.274 13:04:47 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:20:43.533 malloc3 00:20:43.533 13:04:47 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:43.792 [2024-04-17 13:04:47.852348] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:43.792 [2024-04-17 13:04:47.852801] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:43.792 [2024-04-17 13:04:47.852957] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:20:43.792 [2024-04-17 13:04:47.853110] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:43.792 [2024-04-17 13:04:47.855776] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:43.792 [2024-04-17 13:04:47.855957] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:43.792 pt3 
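Each pass of the @361 loop above follows the same two-step pattern: bdev_malloc_create 32 512 makes a 32 MiB backing bdev with 512-byte blocks (the num_blocks 65536 seen in the earlier get_bdevs dumps), and bdev_passthru_create wraps it as ptN with a fixed all-zeros-plus-index UUID, presumably so base bdev identity stays deterministic. Condensed into a standalone loop — the rpc() shorthand is introduced here for readability, while the RPC names, sizes, and UUID scheme are verbatim from the trace; the fourth pass and the bdev_raid_create over 'pt1 pt2 pt3 pt4' follow below:

    # Condensed sketch of the malloc -> passthru loop traced above.
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    for ((i = 1; i <= 4; i++)); do
        rpc bdev_malloc_create 32 512 -b "malloc$i"        # 32 MiB, 512 B blocks
        rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"     # fixed per-slot UUID
    done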
00:20:43.792 13:04:47 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:43.792 13:04:47 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:43.792 13:04:47 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:20:43.792 13:04:47 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:20:43.792 13:04:47 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:20:43.792 13:04:47 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:43.792 13:04:47 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:43.792 13:04:47 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:43.792 13:04:47 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:20:44.050 malloc4 00:20:44.050 13:04:48 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:44.308 [2024-04-17 13:04:48.392414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:44.308 [2024-04-17 13:04:48.392800] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:44.308 [2024-04-17 13:04:48.393008] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:44.308 [2024-04-17 13:04:48.393151] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:44.308 [2024-04-17 13:04:48.395790] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:44.308 [2024-04-17 13:04:48.395976] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:44.308 pt4 00:20:44.308 13:04:48 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:44.308 13:04:48 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:44.308 13:04:48 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:20:44.576 [2024-04-17 13:04:48.612556] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:44.577 [2024-04-17 13:04:48.614957] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:44.577 [2024-04-17 13:04:48.615206] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:44.577 [2024-04-17 13:04:48.615416] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:44.577 [2024-04-17 13:04:48.615791] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:20:44.577 [2024-04-17 13:04:48.615936] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:44.577 [2024-04-17 13:04:48.616128] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:20:44.577 [2024-04-17 13:04:48.616597] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:20:44.577 [2024-04-17 13:04:48.616726] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:20:44.577 [2024-04-17 13:04:48.617028] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:44.577 13:04:48 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:20:44.577 13:04:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
00:20:44.577 13:04:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:44.577 13:04:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:44.577 13:04:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:44.577 13:04:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:44.577 13:04:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:44.577 13:04:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:44.577 13:04:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:44.577 13:04:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:44.577 13:04:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:44.577 13:04:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:44.844 13:04:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:44.844 "name": "raid_bdev1", 00:20:44.844 "uuid": "21083263-960f-456e-a1d6-38474244f016", 00:20:44.844 "strip_size_kb": 64, 00:20:44.844 "state": "online", 00:20:44.844 "raid_level": "raid0", 00:20:44.844 "superblock": true, 00:20:44.844 "num_base_bdevs": 4, 00:20:44.844 "num_base_bdevs_discovered": 4, 00:20:44.844 "num_base_bdevs_operational": 4, 00:20:44.844 "base_bdevs_list": [ 00:20:44.844 { 00:20:44.844 "name": "pt1", 00:20:44.844 "uuid": "0b48864d-6a81-5426-a3b1-0f41a5609f33", 00:20:44.844 "is_configured": true, 00:20:44.844 "data_offset": 2048, 00:20:44.844 "data_size": 63488 00:20:44.844 }, 00:20:44.844 { 00:20:44.844 "name": "pt2", 00:20:44.844 "uuid": "bbd5b352-d48b-5c53-b44d-3e914df82e2e", 00:20:44.844 "is_configured": true, 00:20:44.844 "data_offset": 2048, 00:20:44.844 "data_size": 63488 00:20:44.844 }, 00:20:44.844 { 00:20:44.844 "name": "pt3", 00:20:44.844 "uuid": "5d6457e5-e5ac-5446-87fd-ca9fca97616a", 00:20:44.844 "is_configured": true, 00:20:44.844 "data_offset": 2048, 00:20:44.844 "data_size": 63488 00:20:44.844 }, 00:20:44.844 { 00:20:44.844 "name": "pt4", 00:20:44.844 "uuid": "d8163966-ddff-5d5e-a76f-90551a7953d1", 00:20:44.844 "is_configured": true, 00:20:44.844 "data_offset": 2048, 00:20:44.844 "data_size": 63488 00:20:44.844 } 00:20:44.844 ] 00:20:44.844 }' 00:20:44.844 13:04:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:44.844 13:04:48 -- common/autotest_common.sh@10 -- # set +x 00:20:45.779 13:04:49 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:45.779 13:04:49 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:20:45.779 [2024-04-17 13:04:49.861708] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:45.779 13:04:49 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=21083263-960f-456e-a1d6-38474244f016 00:20:45.779 13:04:49 -- bdev/bdev_raid.sh@380 -- # '[' -z 21083263-960f-456e-a1d6-38474244f016 ']' 00:20:45.779 13:04:49 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:46.038 [2024-04-17 13:04:50.097369] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:46.038 [2024-04-17 13:04:50.097550] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:46.038 [2024-04-17 13:04:50.097758] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:46.038 [2024-04-17 13:04:50.097942] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:20:46.038 [2024-04-17 13:04:50.098046] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:20:46.038 13:04:50 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:20:46.038 13:04:50 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.296 13:04:50 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:20:46.296 13:04:50 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:20:46.296 13:04:50 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:46.296 13:04:50 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:20:46.555 13:04:50 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:46.555 13:04:50 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:46.886 13:04:50 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:46.886 13:04:50 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:47.146 13:04:51 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:47.146 13:04:51 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:20:47.405 13:04:51 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:20:47.405 13:04:51 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:47.664 13:04:51 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:20:47.664 13:04:51 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:47.664 13:04:51 -- common/autotest_common.sh@638 -- # local es=0 00:20:47.664 13:04:51 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:47.664 13:04:51 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:47.664 13:04:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:47.664 13:04:51 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:47.664 13:04:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:47.664 13:04:51 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:47.664 13:04:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:47.664 13:04:51 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:47.664 13:04:51 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:20:47.664 13:04:51 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:47.922 [2024-04-17 13:04:51.949709] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:47.922 [2024-04-17 13:04:51.952138] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is 
claimed 00:20:47.922 [2024-04-17 13:04:51.952358] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:20:47.922 [2024-04-17 13:04:51.952452] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:20:47.922 [2024-04-17 13:04:51.952603] bdev_raid.c:2995:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:20:47.922 [2024-04-17 13:04:51.952793] bdev_raid.c:2995:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:20:47.922 [2024-04-17 13:04:51.952940] bdev_raid.c:2995:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:20:47.922 [2024-04-17 13:04:51.953041] bdev_raid.c:2995:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:20:47.922 [2024-04-17 13:04:51.953136] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:47.922 [2024-04-17 13:04:51.953233] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring 00:20:47.922 request: 00:20:47.922 { 00:20:47.922 "name": "raid_bdev1", 00:20:47.922 "raid_level": "raid0", 00:20:47.922 "base_bdevs": [ 00:20:47.922 "malloc1", 00:20:47.922 "malloc2", 00:20:47.922 "malloc3", 00:20:47.922 "malloc4" 00:20:47.922 ], 00:20:47.922 "superblock": false, 00:20:47.922 "strip_size_kb": 64, 00:20:47.922 "method": "bdev_raid_create", 00:20:47.922 "req_id": 1 00:20:47.922 } 00:20:47.922 Got JSON-RPC error response 00:20:47.922 response: 00:20:47.922 { 00:20:47.922 "code": -17, 00:20:47.922 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:47.922 } 00:20:47.922 13:04:51 -- common/autotest_common.sh@641 -- # es=1 00:20:47.922 13:04:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:47.922 13:04:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:47.922 13:04:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:47.922 13:04:51 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:47.922 13:04:51 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:20:48.181 13:04:52 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:20:48.181 13:04:52 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:20:48.181 13:04:52 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:48.440 [2024-04-17 13:04:52.445839] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:48.440 [2024-04-17 13:04:52.446153] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:48.440 [2024-04-17 13:04:52.446315] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:48.440 [2024-04-17 13:04:52.446457] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:48.440 [2024-04-17 13:04:52.449042] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:48.440 [2024-04-17 13:04:52.449252] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:48.440 [2024-04-17 13:04:52.449472] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:20:48.440 [2024-04-17 13:04:52.449652] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is 
claimed 00:20:48.440 pt1 00:20:48.440 13:04:52 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:20:48.440 13:04:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:48.440 13:04:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:48.440 13:04:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:48.440 13:04:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:48.440 13:04:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:48.440 13:04:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:48.440 13:04:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:48.440 13:04:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:48.440 13:04:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:48.440 13:04:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:48.440 13:04:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:48.700 13:04:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:48.700 "name": "raid_bdev1", 00:20:48.700 "uuid": "21083263-960f-456e-a1d6-38474244f016", 00:20:48.700 "strip_size_kb": 64, 00:20:48.700 "state": "configuring", 00:20:48.700 "raid_level": "raid0", 00:20:48.700 "superblock": true, 00:20:48.700 "num_base_bdevs": 4, 00:20:48.700 "num_base_bdevs_discovered": 1, 00:20:48.700 "num_base_bdevs_operational": 4, 00:20:48.700 "base_bdevs_list": [ 00:20:48.700 { 00:20:48.700 "name": "pt1", 00:20:48.700 "uuid": "0b48864d-6a81-5426-a3b1-0f41a5609f33", 00:20:48.700 "is_configured": true, 00:20:48.700 "data_offset": 2048, 00:20:48.700 "data_size": 63488 00:20:48.700 }, 00:20:48.700 { 00:20:48.700 "name": null, 00:20:48.700 "uuid": "bbd5b352-d48b-5c53-b44d-3e914df82e2e", 00:20:48.700 "is_configured": false, 00:20:48.700 "data_offset": 2048, 00:20:48.700 "data_size": 63488 00:20:48.700 }, 00:20:48.700 { 00:20:48.700 "name": null, 00:20:48.700 "uuid": "5d6457e5-e5ac-5446-87fd-ca9fca97616a", 00:20:48.700 "is_configured": false, 00:20:48.700 "data_offset": 2048, 00:20:48.700 "data_size": 63488 00:20:48.700 }, 00:20:48.700 { 00:20:48.700 "name": null, 00:20:48.700 "uuid": "d8163966-ddff-5d5e-a76f-90551a7953d1", 00:20:48.700 "is_configured": false, 00:20:48.700 "data_offset": 2048, 00:20:48.700 "data_size": 63488 00:20:48.700 } 00:20:48.700 ] 00:20:48.700 }' 00:20:48.700 13:04:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:48.700 13:04:52 -- common/autotest_common.sh@10 -- # set +x 00:20:49.635 13:04:53 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:20:49.635 13:04:53 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:49.635 [2024-04-17 13:04:53.650303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:49.635 [2024-04-17 13:04:53.650586] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:49.635 [2024-04-17 13:04:53.650668] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:20:49.635 [2024-04-17 13:04:53.650878] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:49.635 [2024-04-17 13:04:53.651430] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:49.635 [2024-04-17 13:04:53.651594] vbdev_passthru.c: 705:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:20:49.635 [2024-04-17 13:04:53.651846] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:49.636 [2024-04-17 13:04:53.651987] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:49.636 pt2 00:20:49.636 13:04:53 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:49.894 [2024-04-17 13:04:53.918374] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:49.894 13:04:53 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:20:49.894 13:04:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:49.894 13:04:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:49.894 13:04:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:49.894 13:04:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:49.894 13:04:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:49.894 13:04:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:49.894 13:04:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:49.894 13:04:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:49.894 13:04:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:49.894 13:04:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.894 13:04:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.153 13:04:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:50.153 "name": "raid_bdev1", 00:20:50.153 "uuid": "21083263-960f-456e-a1d6-38474244f016", 00:20:50.153 "strip_size_kb": 64, 00:20:50.153 "state": "configuring", 00:20:50.153 "raid_level": "raid0", 00:20:50.153 "superblock": true, 00:20:50.153 "num_base_bdevs": 4, 00:20:50.153 "num_base_bdevs_discovered": 1, 00:20:50.153 "num_base_bdevs_operational": 4, 00:20:50.153 "base_bdevs_list": [ 00:20:50.153 { 00:20:50.153 "name": "pt1", 00:20:50.153 "uuid": "0b48864d-6a81-5426-a3b1-0f41a5609f33", 00:20:50.153 "is_configured": true, 00:20:50.153 "data_offset": 2048, 00:20:50.153 "data_size": 63488 00:20:50.153 }, 00:20:50.153 { 00:20:50.153 "name": null, 00:20:50.153 "uuid": "bbd5b352-d48b-5c53-b44d-3e914df82e2e", 00:20:50.153 "is_configured": false, 00:20:50.153 "data_offset": 2048, 00:20:50.153 "data_size": 63488 00:20:50.153 }, 00:20:50.153 { 00:20:50.153 "name": null, 00:20:50.153 "uuid": "5d6457e5-e5ac-5446-87fd-ca9fca97616a", 00:20:50.153 "is_configured": false, 00:20:50.153 "data_offset": 2048, 00:20:50.153 "data_size": 63488 00:20:50.153 }, 00:20:50.153 { 00:20:50.153 "name": null, 00:20:50.153 "uuid": "d8163966-ddff-5d5e-a76f-90551a7953d1", 00:20:50.153 "is_configured": false, 00:20:50.153 "data_offset": 2048, 00:20:50.153 "data_size": 63488 00:20:50.153 } 00:20:50.153 ] 00:20:50.153 }' 00:20:50.153 13:04:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:50.153 13:04:54 -- common/autotest_common.sh@10 -- # set +x 00:20:51.089 13:04:54 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:20:51.089 13:04:54 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:51.089 13:04:54 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:51.089 [2024-04-17 13:04:55.122624] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc2 00:20:51.089 [2024-04-17 13:04:55.122892] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:51.089 [2024-04-17 13:04:55.123044] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:51.089 [2024-04-17 13:04:55.123166] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:51.089 [2024-04-17 13:04:55.123779] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:51.089 [2024-04-17 13:04:55.123969] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:51.089 [2024-04-17 13:04:55.124173] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:51.089 [2024-04-17 13:04:55.124304] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:51.089 pt2 00:20:51.089 13:04:55 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:20:51.089 13:04:55 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:51.089 13:04:55 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:51.403 [2024-04-17 13:04:55.394738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:51.403 [2024-04-17 13:04:55.395144] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:51.403 [2024-04-17 13:04:55.395292] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:20:51.403 [2024-04-17 13:04:55.395414] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:51.403 [2024-04-17 13:04:55.396081] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:51.403 [2024-04-17 13:04:55.396278] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:51.403 [2024-04-17 13:04:55.396485] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:20:51.403 [2024-04-17 13:04:55.396617] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:51.403 pt3 00:20:51.403 13:04:55 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:20:51.403 13:04:55 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:51.403 13:04:55 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:51.661 [2024-04-17 13:04:55.654797] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:51.661 [2024-04-17 13:04:55.655159] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:51.661 [2024-04-17 13:04:55.655311] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:20:51.661 [2024-04-17 13:04:55.655436] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:51.661 [2024-04-17 13:04:55.656085] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:51.661 [2024-04-17 13:04:55.656257] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:51.661 [2024-04-17 13:04:55.656479] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:20:51.661 [2024-04-17 13:04:55.656611] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 
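Note that no bdev_raid_create is traced before the reassembly that follows: each bdev_passthru_create above ends in 'raid superblock found on bdev ptN', and once pt4 is claimed the examine path registers raid_bdev1 and brings it back online on its own. The test then verifies that state over RPC; a standalone check in the same RPC-plus-jq style as its own probes (the one-shot wrapper here is an assumption, not the harness code) could look like:

    # Sketch: confirm superblock-driven reassembly reached "online".
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    state=$(rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state')
    [[ $state == online ]] || echo "raid_bdev1 not online (state: $state)" >&2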
00:20:51.661 [2024-04-17 13:04:55.656865] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:20:51.661 [2024-04-17 13:04:55.656979] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:51.661 [2024-04-17 13:04:55.657153] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:51.661 [2024-04-17 13:04:55.657532] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:20:51.661 [2024-04-17 13:04:55.657661] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:20:51.661 [2024-04-17 13:04:55.657907] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:51.661 pt4 00:20:51.661 13:04:55 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:20:51.661 13:04:55 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:51.661 13:04:55 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:20:51.661 13:04:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:51.661 13:04:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:51.661 13:04:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:51.661 13:04:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:51.661 13:04:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:51.661 13:04:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:51.661 13:04:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:51.661 13:04:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:51.661 13:04:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:51.661 13:04:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.661 13:04:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.920 13:04:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:51.920 "name": "raid_bdev1", 00:20:51.920 "uuid": "21083263-960f-456e-a1d6-38474244f016", 00:20:51.920 "strip_size_kb": 64, 00:20:51.920 "state": "online", 00:20:51.920 "raid_level": "raid0", 00:20:51.920 "superblock": true, 00:20:51.920 "num_base_bdevs": 4, 00:20:51.920 "num_base_bdevs_discovered": 4, 00:20:51.920 "num_base_bdevs_operational": 4, 00:20:51.920 "base_bdevs_list": [ 00:20:51.920 { 00:20:51.920 "name": "pt1", 00:20:51.920 "uuid": "0b48864d-6a81-5426-a3b1-0f41a5609f33", 00:20:51.920 "is_configured": true, 00:20:51.920 "data_offset": 2048, 00:20:51.920 "data_size": 63488 00:20:51.920 }, 00:20:51.920 { 00:20:51.920 "name": "pt2", 00:20:51.920 "uuid": "bbd5b352-d48b-5c53-b44d-3e914df82e2e", 00:20:51.920 "is_configured": true, 00:20:51.920 "data_offset": 2048, 00:20:51.920 "data_size": 63488 00:20:51.920 }, 00:20:51.920 { 00:20:51.920 "name": "pt3", 00:20:51.920 "uuid": "5d6457e5-e5ac-5446-87fd-ca9fca97616a", 00:20:51.920 "is_configured": true, 00:20:51.920 "data_offset": 2048, 00:20:51.920 "data_size": 63488 00:20:51.920 }, 00:20:51.920 { 00:20:51.920 "name": "pt4", 00:20:51.920 "uuid": "d8163966-ddff-5d5e-a76f-90551a7953d1", 00:20:51.920 "is_configured": true, 00:20:51.920 "data_offset": 2048, 00:20:51.920 "data_size": 63488 00:20:51.920 } 00:20:51.920 ] 00:20:51.920 }' 00:20:51.920 13:04:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:51.920 13:04:55 -- common/autotest_common.sh@10 -- # set +x 00:20:52.489 13:04:56 -- bdev/bdev_raid.sh@430 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:52.489 13:04:56 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:20:52.748 [2024-04-17 13:04:56.859472] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:52.748 13:04:56 -- bdev/bdev_raid.sh@430 -- # '[' 21083263-960f-456e-a1d6-38474244f016 '!=' 21083263-960f-456e-a1d6-38474244f016 ']' 00:20:52.748 13:04:56 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:20:52.748 13:04:56 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:20:52.748 13:04:56 -- bdev/bdev_raid.sh@197 -- # return 1 00:20:52.748 13:04:56 -- bdev/bdev_raid.sh@511 -- # killprocess 127126 00:20:52.748 13:04:56 -- common/autotest_common.sh@924 -- # '[' -z 127126 ']' 00:20:52.748 13:04:56 -- common/autotest_common.sh@928 -- # kill -0 127126 00:20:52.748 13:04:56 -- common/autotest_common.sh@929 -- # uname 00:20:52.748 13:04:56 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:20:52.748 13:04:56 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 127126 00:20:53.006 13:04:56 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:20:53.006 13:04:56 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:20:53.006 13:04:56 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 127126' 00:20:53.006 killing process with pid 127126 00:20:53.006 13:04:56 -- common/autotest_common.sh@943 -- # kill 127126 00:20:53.006 13:04:56 -- common/autotest_common.sh@948 -- # wait 127126 00:20:53.006 [2024-04-17 13:04:56.899960] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:53.006 [2024-04-17 13:04:56.900056] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:53.006 [2024-04-17 13:04:56.900132] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:53.006 [2024-04-17 13:04:56.900143] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:20:53.264 [2024-04-17 13:04:57.231572] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:54.200 13:04:58 -- bdev/bdev_raid.sh@513 -- # return 0 00:20:54.200 00:20:54.200 real 0m13.045s 00:20:54.200 user 0m23.038s 00:20:54.459 sys 0m1.314s 00:20:54.459 13:04:58 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:20:54.459 13:04:58 -- common/autotest_common.sh@10 -- # set +x 00:20:54.459 ************************************ 00:20:54.459 END TEST raid_superblock_test 00:20:54.459 ************************************ 00:20:54.459 13:04:58 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:20:54.459 13:04:58 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:20:54.459 13:04:58 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:20:54.459 13:04:58 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:20:54.459 13:04:58 -- common/autotest_common.sh@10 -- # set +x 00:20:54.459 ************************************ 00:20:54.459 START TEST raid_state_function_test 00:20:54.459 ************************************ 00:20:54.459 13:04:58 -- common/autotest_common.sh@1099 -- # raid_state_function_test concat 4 false 00:20:54.459 13:04:58 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:20:54.459 13:04:58 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:20:54.459 13:04:58 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:20:54.459 13:04:58 -- 
bdev/bdev_raid.sh@205 -- # local raid_bdev 00:20:54.459 13:04:58 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:20:54.459 13:04:58 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:20:54.459 13:04:58 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:54.459 13:04:58 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:20:54.459 13:04:58 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:54.459 13:04:58 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:54.459 13:04:58 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:20:54.459 13:04:58 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:54.459 13:04:58 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:54.459 13:04:58 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:20:54.459 13:04:58 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:54.459 13:04:58 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:54.459 13:04:58 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:20:54.459 13:04:58 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:54.459 13:04:58 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:54.459 13:04:58 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:20:54.459 13:04:58 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:20:54.459 13:04:58 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:20:54.459 13:04:58 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:20:54.459 13:04:58 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:20:54.459 13:04:58 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:20:54.459 13:04:58 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:20:54.459 13:04:58 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:20:54.459 13:04:58 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:20:54.459 13:04:58 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:20:54.459 13:04:58 -- bdev/bdev_raid.sh@226 -- # raid_pid=127482 00:20:54.459 13:04:58 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 127482' 00:20:54.459 Process raid pid: 127482 00:20:54.459 13:04:58 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:54.459 13:04:58 -- bdev/bdev_raid.sh@228 -- # waitforlisten 127482 /var/tmp/spdk-raid.sock 00:20:54.459 13:04:58 -- common/autotest_common.sh@817 -- # '[' -z 127482 ']' 00:20:54.459 13:04:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:54.459 13:04:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:54.459 13:04:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:54.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:54.459 13:04:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:54.459 13:04:58 -- common/autotest_common.sh@10 -- # set +x 00:20:54.459 [2024-04-17 13:04:58.516973] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:20:54.459 [2024-04-17 13:04:58.517474] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:54.717 [2024-04-17 13:04:58.690190] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.976 [2024-04-17 13:04:58.909212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.976 [2024-04-17 13:04:59.101391] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:55.543 13:04:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:55.543 13:04:59 -- common/autotest_common.sh@850 -- # return 0 00:20:55.543 13:04:59 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:55.543 [2024-04-17 13:04:59.601849] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:55.543 [2024-04-17 13:04:59.602230] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:55.543 [2024-04-17 13:04:59.602337] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:55.543 [2024-04-17 13:04:59.602394] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:55.543 [2024-04-17 13:04:59.602579] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:55.543 [2024-04-17 13:04:59.602653] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:55.543 [2024-04-17 13:04:59.602837] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:55.543 [2024-04-17 13:04:59.602895] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:55.543 13:04:59 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:20:55.543 13:04:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:55.543 13:04:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:55.543 13:04:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:20:55.543 13:04:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:55.543 13:04:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:55.543 13:04:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:55.543 13:04:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:55.543 13:04:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:55.543 13:04:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:55.543 13:04:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.543 13:04:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:55.802 13:04:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:55.802 "name": "Existed_Raid", 00:20:55.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.802 "strip_size_kb": 64, 00:20:55.802 "state": "configuring", 00:20:55.802 "raid_level": "concat", 00:20:55.802 "superblock": false, 00:20:55.802 "num_base_bdevs": 4, 00:20:55.802 "num_base_bdevs_discovered": 0, 00:20:55.802 "num_base_bdevs_operational": 4, 00:20:55.802 "base_bdevs_list": [ 00:20:55.802 { 00:20:55.802 
"name": "BaseBdev1", 00:20:55.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.802 "is_configured": false, 00:20:55.802 "data_offset": 0, 00:20:55.802 "data_size": 0 00:20:55.802 }, 00:20:55.802 { 00:20:55.802 "name": "BaseBdev2", 00:20:55.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.802 "is_configured": false, 00:20:55.802 "data_offset": 0, 00:20:55.802 "data_size": 0 00:20:55.802 }, 00:20:55.802 { 00:20:55.802 "name": "BaseBdev3", 00:20:55.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.802 "is_configured": false, 00:20:55.802 "data_offset": 0, 00:20:55.802 "data_size": 0 00:20:55.802 }, 00:20:55.802 { 00:20:55.802 "name": "BaseBdev4", 00:20:55.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.802 "is_configured": false, 00:20:55.802 "data_offset": 0, 00:20:55.802 "data_size": 0 00:20:55.802 } 00:20:55.802 ] 00:20:55.802 }' 00:20:55.802 13:04:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:55.802 13:04:59 -- common/autotest_common.sh@10 -- # set +x 00:20:56.739 13:05:00 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:56.739 [2024-04-17 13:05:00.810033] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:56.739 [2024-04-17 13:05:00.810281] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:20:56.739 13:05:00 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:56.998 [2024-04-17 13:05:01.054097] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:56.998 [2024-04-17 13:05:01.054446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:56.998 [2024-04-17 13:05:01.054545] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:56.998 [2024-04-17 13:05:01.054671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:56.998 [2024-04-17 13:05:01.054761] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:56.998 [2024-04-17 13:05:01.054921] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:56.998 [2024-04-17 13:05:01.055031] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:56.998 [2024-04-17 13:05:01.055092] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:56.998 13:05:01 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:57.257 [2024-04-17 13:05:01.322745] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:57.257 BaseBdev1 00:20:57.257 13:05:01 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:20:57.257 13:05:01 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:20:57.257 13:05:01 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:20:57.257 13:05:01 -- common/autotest_common.sh@887 -- # local i 00:20:57.257 13:05:01 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:20:57.257 13:05:01 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:20:57.257 13:05:01 -- common/autotest_common.sh@890 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:57.516 13:05:01 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:57.775 [ 00:20:57.775 { 00:20:57.775 "name": "BaseBdev1", 00:20:57.775 "aliases": [ 00:20:57.775 "1529ffee-4227-4dae-8e65-1c892bb5cbc1" 00:20:57.775 ], 00:20:57.775 "product_name": "Malloc disk", 00:20:57.775 "block_size": 512, 00:20:57.775 "num_blocks": 65536, 00:20:57.775 "uuid": "1529ffee-4227-4dae-8e65-1c892bb5cbc1", 00:20:57.775 "assigned_rate_limits": { 00:20:57.775 "rw_ios_per_sec": 0, 00:20:57.775 "rw_mbytes_per_sec": 0, 00:20:57.775 "r_mbytes_per_sec": 0, 00:20:57.775 "w_mbytes_per_sec": 0 00:20:57.775 }, 00:20:57.775 "claimed": true, 00:20:57.775 "claim_type": "exclusive_write", 00:20:57.775 "zoned": false, 00:20:57.775 "supported_io_types": { 00:20:57.775 "read": true, 00:20:57.775 "write": true, 00:20:57.775 "unmap": true, 00:20:57.775 "write_zeroes": true, 00:20:57.775 "flush": true, 00:20:57.775 "reset": true, 00:20:57.775 "compare": false, 00:20:57.775 "compare_and_write": false, 00:20:57.775 "abort": true, 00:20:57.775 "nvme_admin": false, 00:20:57.776 "nvme_io": false 00:20:57.776 }, 00:20:57.776 "memory_domains": [ 00:20:57.776 { 00:20:57.776 "dma_device_id": "system", 00:20:57.776 "dma_device_type": 1 00:20:57.776 }, 00:20:57.776 { 00:20:57.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:57.776 "dma_device_type": 2 00:20:57.776 } 00:20:57.776 ], 00:20:57.776 "driver_specific": {} 00:20:57.776 } 00:20:57.776 ] 00:20:57.776 13:05:01 -- common/autotest_common.sh@893 -- # return 0 00:20:57.776 13:05:01 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:20:57.776 13:05:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:57.776 13:05:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:57.776 13:05:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:20:57.776 13:05:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:57.776 13:05:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:57.776 13:05:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:57.776 13:05:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:57.776 13:05:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:57.776 13:05:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:57.776 13:05:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:57.776 13:05:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:58.034 13:05:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:58.034 "name": "Existed_Raid", 00:20:58.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.034 "strip_size_kb": 64, 00:20:58.034 "state": "configuring", 00:20:58.034 "raid_level": "concat", 00:20:58.034 "superblock": false, 00:20:58.034 "num_base_bdevs": 4, 00:20:58.034 "num_base_bdevs_discovered": 1, 00:20:58.034 "num_base_bdevs_operational": 4, 00:20:58.034 "base_bdevs_list": [ 00:20:58.034 { 00:20:58.034 "name": "BaseBdev1", 00:20:58.035 "uuid": "1529ffee-4227-4dae-8e65-1c892bb5cbc1", 00:20:58.035 "is_configured": true, 00:20:58.035 "data_offset": 0, 00:20:58.035 "data_size": 65536 00:20:58.035 }, 00:20:58.035 { 00:20:58.035 "name": "BaseBdev2", 00:20:58.035 "uuid": "00000000-0000-0000-0000-000000000000", 
00:20:58.035 "is_configured": false, 00:20:58.035 "data_offset": 0, 00:20:58.035 "data_size": 0 00:20:58.035 }, 00:20:58.035 { 00:20:58.035 "name": "BaseBdev3", 00:20:58.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.035 "is_configured": false, 00:20:58.035 "data_offset": 0, 00:20:58.035 "data_size": 0 00:20:58.035 }, 00:20:58.035 { 00:20:58.035 "name": "BaseBdev4", 00:20:58.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.035 "is_configured": false, 00:20:58.035 "data_offset": 0, 00:20:58.035 "data_size": 0 00:20:58.035 } 00:20:58.035 ] 00:20:58.035 }' 00:20:58.035 13:05:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:58.035 13:05:02 -- common/autotest_common.sh@10 -- # set +x 00:20:58.970 13:05:02 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:58.970 [2024-04-17 13:05:02.979226] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:58.970 [2024-04-17 13:05:02.979539] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:20:58.970 13:05:02 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:20:58.970 13:05:02 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:59.230 [2024-04-17 13:05:03.263433] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:59.230 [2024-04-17 13:05:03.265784] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:59.230 [2024-04-17 13:05:03.266013] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:59.230 [2024-04-17 13:05:03.266155] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:59.230 [2024-04-17 13:05:03.266216] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:59.230 [2024-04-17 13:05:03.266310] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:59.230 [2024-04-17 13:05:03.266363] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:59.230 13:05:03 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:20:59.230 13:05:03 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:59.230 13:05:03 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:20:59.230 13:05:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:59.230 13:05:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:59.230 13:05:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:20:59.230 13:05:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:59.230 13:05:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:59.230 13:05:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:59.230 13:05:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:59.230 13:05:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:59.230 13:05:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:59.230 13:05:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:59.230 13:05:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:20:59.489 13:05:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:59.489 "name": "Existed_Raid", 00:20:59.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.489 "strip_size_kb": 64, 00:20:59.489 "state": "configuring", 00:20:59.489 "raid_level": "concat", 00:20:59.489 "superblock": false, 00:20:59.489 "num_base_bdevs": 4, 00:20:59.489 "num_base_bdevs_discovered": 1, 00:20:59.489 "num_base_bdevs_operational": 4, 00:20:59.489 "base_bdevs_list": [ 00:20:59.489 { 00:20:59.489 "name": "BaseBdev1", 00:20:59.489 "uuid": "1529ffee-4227-4dae-8e65-1c892bb5cbc1", 00:20:59.489 "is_configured": true, 00:20:59.489 "data_offset": 0, 00:20:59.489 "data_size": 65536 00:20:59.489 }, 00:20:59.489 { 00:20:59.489 "name": "BaseBdev2", 00:20:59.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.489 "is_configured": false, 00:20:59.489 "data_offset": 0, 00:20:59.489 "data_size": 0 00:20:59.489 }, 00:20:59.489 { 00:20:59.489 "name": "BaseBdev3", 00:20:59.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.489 "is_configured": false, 00:20:59.489 "data_offset": 0, 00:20:59.489 "data_size": 0 00:20:59.489 }, 00:20:59.489 { 00:20:59.489 "name": "BaseBdev4", 00:20:59.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.489 "is_configured": false, 00:20:59.489 "data_offset": 0, 00:20:59.489 "data_size": 0 00:20:59.490 } 00:20:59.490 ] 00:20:59.490 }' 00:20:59.490 13:05:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:59.490 13:05:03 -- common/autotest_common.sh@10 -- # set +x 00:21:00.462 13:05:04 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:00.462 [2024-04-17 13:05:04.501155] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:00.462 BaseBdev2 00:21:00.462 13:05:04 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:21:00.462 13:05:04 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:21:00.462 13:05:04 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:21:00.462 13:05:04 -- common/autotest_common.sh@887 -- # local i 00:21:00.462 13:05:04 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:21:00.462 13:05:04 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:21:00.462 13:05:04 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:00.723 13:05:04 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:00.986 [ 00:21:00.986 { 00:21:00.986 "name": "BaseBdev2", 00:21:00.986 "aliases": [ 00:21:00.986 "2c75d1ae-6b95-48ea-b63d-668f7868cabc" 00:21:00.986 ], 00:21:00.986 "product_name": "Malloc disk", 00:21:00.986 "block_size": 512, 00:21:00.986 "num_blocks": 65536, 00:21:00.986 "uuid": "2c75d1ae-6b95-48ea-b63d-668f7868cabc", 00:21:00.986 "assigned_rate_limits": { 00:21:00.986 "rw_ios_per_sec": 0, 00:21:00.986 "rw_mbytes_per_sec": 0, 00:21:00.986 "r_mbytes_per_sec": 0, 00:21:00.986 "w_mbytes_per_sec": 0 00:21:00.986 }, 00:21:00.986 "claimed": true, 00:21:00.986 "claim_type": "exclusive_write", 00:21:00.986 "zoned": false, 00:21:00.986 "supported_io_types": { 00:21:00.986 "read": true, 00:21:00.986 "write": true, 00:21:00.986 "unmap": true, 00:21:00.986 "write_zeroes": true, 00:21:00.986 "flush": true, 00:21:00.986 "reset": true, 00:21:00.986 "compare": false, 00:21:00.986 "compare_and_write": false, 00:21:00.986 "abort": true, 
00:21:00.986 "nvme_admin": false, 00:21:00.986 "nvme_io": false 00:21:00.986 }, 00:21:00.986 "memory_domains": [ 00:21:00.986 { 00:21:00.986 "dma_device_id": "system", 00:21:00.986 "dma_device_type": 1 00:21:00.986 }, 00:21:00.986 { 00:21:00.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:00.986 "dma_device_type": 2 00:21:00.986 } 00:21:00.986 ], 00:21:00.986 "driver_specific": {} 00:21:00.986 } 00:21:00.986 ] 00:21:00.986 13:05:05 -- common/autotest_common.sh@893 -- # return 0 00:21:00.986 13:05:05 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:00.986 13:05:05 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:00.986 13:05:05 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:00.986 13:05:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:00.986 13:05:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:00.986 13:05:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:00.986 13:05:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:00.986 13:05:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:00.986 13:05:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:00.986 13:05:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:00.986 13:05:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:00.986 13:05:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:00.986 13:05:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:00.986 13:05:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:01.245 13:05:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:01.245 "name": "Existed_Raid", 00:21:01.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.245 "strip_size_kb": 64, 00:21:01.245 "state": "configuring", 00:21:01.245 "raid_level": "concat", 00:21:01.245 "superblock": false, 00:21:01.245 "num_base_bdevs": 4, 00:21:01.245 "num_base_bdevs_discovered": 2, 00:21:01.245 "num_base_bdevs_operational": 4, 00:21:01.245 "base_bdevs_list": [ 00:21:01.245 { 00:21:01.245 "name": "BaseBdev1", 00:21:01.245 "uuid": "1529ffee-4227-4dae-8e65-1c892bb5cbc1", 00:21:01.245 "is_configured": true, 00:21:01.245 "data_offset": 0, 00:21:01.245 "data_size": 65536 00:21:01.245 }, 00:21:01.245 { 00:21:01.245 "name": "BaseBdev2", 00:21:01.245 "uuid": "2c75d1ae-6b95-48ea-b63d-668f7868cabc", 00:21:01.245 "is_configured": true, 00:21:01.245 "data_offset": 0, 00:21:01.245 "data_size": 65536 00:21:01.245 }, 00:21:01.245 { 00:21:01.245 "name": "BaseBdev3", 00:21:01.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.245 "is_configured": false, 00:21:01.245 "data_offset": 0, 00:21:01.245 "data_size": 0 00:21:01.245 }, 00:21:01.245 { 00:21:01.245 "name": "BaseBdev4", 00:21:01.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.245 "is_configured": false, 00:21:01.245 "data_offset": 0, 00:21:01.245 "data_size": 0 00:21:01.245 } 00:21:01.245 ] 00:21:01.245 }' 00:21:01.245 13:05:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:01.245 13:05:05 -- common/autotest_common.sh@10 -- # set +x 00:21:02.182 13:05:05 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:02.182 [2024-04-17 13:05:06.222831] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:02.182 BaseBdev3 00:21:02.182 13:05:06 -- 
bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:21:02.182 13:05:06 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:21:02.182 13:05:06 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:21:02.182 13:05:06 -- common/autotest_common.sh@887 -- # local i 00:21:02.182 13:05:06 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:21:02.182 13:05:06 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:21:02.182 13:05:06 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:02.457 13:05:06 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:02.725 [ 00:21:02.726 { 00:21:02.726 "name": "BaseBdev3", 00:21:02.726 "aliases": [ 00:21:02.726 "af291139-c57d-4be7-94ce-0d825c9bf2ff" 00:21:02.726 ], 00:21:02.726 "product_name": "Malloc disk", 00:21:02.726 "block_size": 512, 00:21:02.726 "num_blocks": 65536, 00:21:02.726 "uuid": "af291139-c57d-4be7-94ce-0d825c9bf2ff", 00:21:02.726 "assigned_rate_limits": { 00:21:02.726 "rw_ios_per_sec": 0, 00:21:02.726 "rw_mbytes_per_sec": 0, 00:21:02.726 "r_mbytes_per_sec": 0, 00:21:02.726 "w_mbytes_per_sec": 0 00:21:02.726 }, 00:21:02.726 "claimed": true, 00:21:02.726 "claim_type": "exclusive_write", 00:21:02.726 "zoned": false, 00:21:02.726 "supported_io_types": { 00:21:02.726 "read": true, 00:21:02.726 "write": true, 00:21:02.726 "unmap": true, 00:21:02.726 "write_zeroes": true, 00:21:02.726 "flush": true, 00:21:02.726 "reset": true, 00:21:02.726 "compare": false, 00:21:02.726 "compare_and_write": false, 00:21:02.726 "abort": true, 00:21:02.726 "nvme_admin": false, 00:21:02.726 "nvme_io": false 00:21:02.726 }, 00:21:02.726 "memory_domains": [ 00:21:02.726 { 00:21:02.726 "dma_device_id": "system", 00:21:02.726 "dma_device_type": 1 00:21:02.726 }, 00:21:02.726 { 00:21:02.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:02.726 "dma_device_type": 2 00:21:02.726 } 00:21:02.726 ], 00:21:02.726 "driver_specific": {} 00:21:02.726 } 00:21:02.726 ] 00:21:02.726 13:05:06 -- common/autotest_common.sh@893 -- # return 0 00:21:02.726 13:05:06 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:02.726 13:05:06 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:02.726 13:05:06 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:02.726 13:05:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:02.726 13:05:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:02.726 13:05:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:02.726 13:05:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:02.726 13:05:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:02.726 13:05:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:02.726 13:05:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:02.726 13:05:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:02.726 13:05:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:02.726 13:05:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:02.726 13:05:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:02.985 13:05:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:02.985 "name": "Existed_Raid", 00:21:02.985 "uuid": "00000000-0000-0000-0000-000000000000", 
00:21:02.985 "strip_size_kb": 64, 00:21:02.985 "state": "configuring", 00:21:02.985 "raid_level": "concat", 00:21:02.985 "superblock": false, 00:21:02.985 "num_base_bdevs": 4, 00:21:02.985 "num_base_bdevs_discovered": 3, 00:21:02.985 "num_base_bdevs_operational": 4, 00:21:02.985 "base_bdevs_list": [ 00:21:02.985 { 00:21:02.985 "name": "BaseBdev1", 00:21:02.985 "uuid": "1529ffee-4227-4dae-8e65-1c892bb5cbc1", 00:21:02.985 "is_configured": true, 00:21:02.985 "data_offset": 0, 00:21:02.985 "data_size": 65536 00:21:02.985 }, 00:21:02.985 { 00:21:02.985 "name": "BaseBdev2", 00:21:02.985 "uuid": "2c75d1ae-6b95-48ea-b63d-668f7868cabc", 00:21:02.985 "is_configured": true, 00:21:02.985 "data_offset": 0, 00:21:02.985 "data_size": 65536 00:21:02.985 }, 00:21:02.985 { 00:21:02.985 "name": "BaseBdev3", 00:21:02.985 "uuid": "af291139-c57d-4be7-94ce-0d825c9bf2ff", 00:21:02.985 "is_configured": true, 00:21:02.985 "data_offset": 0, 00:21:02.985 "data_size": 65536 00:21:02.985 }, 00:21:02.985 { 00:21:02.985 "name": "BaseBdev4", 00:21:02.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.985 "is_configured": false, 00:21:02.985 "data_offset": 0, 00:21:02.985 "data_size": 0 00:21:02.985 } 00:21:02.985 ] 00:21:02.985 }' 00:21:02.985 13:05:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:02.985 13:05:07 -- common/autotest_common.sh@10 -- # set +x 00:21:03.922 13:05:07 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:04.180 [2024-04-17 13:05:08.088902] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:04.180 [2024-04-17 13:05:08.089151] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:21:04.180 [2024-04-17 13:05:08.089198] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:21:04.180 [2024-04-17 13:05:08.089442] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:21:04.180 [2024-04-17 13:05:08.089938] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:21:04.180 [2024-04-17 13:05:08.090060] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:21:04.180 [2024-04-17 13:05:08.090431] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:04.180 BaseBdev4 00:21:04.180 13:05:08 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:21:04.180 13:05:08 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:21:04.180 13:05:08 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:21:04.180 13:05:08 -- common/autotest_common.sh@887 -- # local i 00:21:04.180 13:05:08 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:21:04.180 13:05:08 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:21:04.180 13:05:08 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:04.439 13:05:08 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:04.721 [ 00:21:04.721 { 00:21:04.721 "name": "BaseBdev4", 00:21:04.721 "aliases": [ 00:21:04.721 "159acbed-8c7f-430a-ac43-313410262b17" 00:21:04.721 ], 00:21:04.721 "product_name": "Malloc disk", 00:21:04.721 "block_size": 512, 00:21:04.721 "num_blocks": 65536, 00:21:04.721 "uuid": 
"159acbed-8c7f-430a-ac43-313410262b17", 00:21:04.721 "assigned_rate_limits": { 00:21:04.721 "rw_ios_per_sec": 0, 00:21:04.721 "rw_mbytes_per_sec": 0, 00:21:04.721 "r_mbytes_per_sec": 0, 00:21:04.721 "w_mbytes_per_sec": 0 00:21:04.721 }, 00:21:04.721 "claimed": true, 00:21:04.721 "claim_type": "exclusive_write", 00:21:04.721 "zoned": false, 00:21:04.721 "supported_io_types": { 00:21:04.721 "read": true, 00:21:04.721 "write": true, 00:21:04.721 "unmap": true, 00:21:04.721 "write_zeroes": true, 00:21:04.721 "flush": true, 00:21:04.721 "reset": true, 00:21:04.721 "compare": false, 00:21:04.721 "compare_and_write": false, 00:21:04.721 "abort": true, 00:21:04.721 "nvme_admin": false, 00:21:04.721 "nvme_io": false 00:21:04.721 }, 00:21:04.721 "memory_domains": [ 00:21:04.721 { 00:21:04.721 "dma_device_id": "system", 00:21:04.721 "dma_device_type": 1 00:21:04.721 }, 00:21:04.721 { 00:21:04.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:04.721 "dma_device_type": 2 00:21:04.721 } 00:21:04.721 ], 00:21:04.721 "driver_specific": {} 00:21:04.721 } 00:21:04.721 ] 00:21:04.721 13:05:08 -- common/autotest_common.sh@893 -- # return 0 00:21:04.721 13:05:08 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:04.721 13:05:08 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:04.721 13:05:08 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:21:04.721 13:05:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:04.721 13:05:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:04.721 13:05:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:04.721 13:05:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:04.721 13:05:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:04.721 13:05:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:04.721 13:05:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:04.721 13:05:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:04.721 13:05:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:04.721 13:05:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:04.721 13:05:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:04.721 13:05:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:04.721 "name": "Existed_Raid", 00:21:04.721 "uuid": "96706ff6-519f-432c-af96-c2de3d235c6d", 00:21:04.721 "strip_size_kb": 64, 00:21:04.721 "state": "online", 00:21:04.721 "raid_level": "concat", 00:21:04.721 "superblock": false, 00:21:04.721 "num_base_bdevs": 4, 00:21:04.721 "num_base_bdevs_discovered": 4, 00:21:04.721 "num_base_bdevs_operational": 4, 00:21:04.721 "base_bdevs_list": [ 00:21:04.721 { 00:21:04.721 "name": "BaseBdev1", 00:21:04.721 "uuid": "1529ffee-4227-4dae-8e65-1c892bb5cbc1", 00:21:04.721 "is_configured": true, 00:21:04.721 "data_offset": 0, 00:21:04.721 "data_size": 65536 00:21:04.721 }, 00:21:04.721 { 00:21:04.721 "name": "BaseBdev2", 00:21:04.721 "uuid": "2c75d1ae-6b95-48ea-b63d-668f7868cabc", 00:21:04.721 "is_configured": true, 00:21:04.721 "data_offset": 0, 00:21:04.721 "data_size": 65536 00:21:04.721 }, 00:21:04.721 { 00:21:04.721 "name": "BaseBdev3", 00:21:04.721 "uuid": "af291139-c57d-4be7-94ce-0d825c9bf2ff", 00:21:04.721 "is_configured": true, 00:21:04.721 "data_offset": 0, 00:21:04.721 "data_size": 65536 00:21:04.721 }, 00:21:04.721 { 00:21:04.721 "name": "BaseBdev4", 00:21:04.721 "uuid": 
"159acbed-8c7f-430a-ac43-313410262b17", 00:21:04.721 "is_configured": true, 00:21:04.721 "data_offset": 0, 00:21:04.721 "data_size": 65536 00:21:04.721 } 00:21:04.721 ] 00:21:04.721 }' 00:21:04.721 13:05:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:04.721 13:05:08 -- common/autotest_common.sh@10 -- # set +x 00:21:05.684 13:05:09 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:05.942 [2024-04-17 13:05:09.836172] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:05.942 [2024-04-17 13:05:09.836390] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:05.942 [2024-04-17 13:05:09.836584] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:05.942 13:05:09 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:21:05.942 13:05:09 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:21:05.942 13:05:09 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:21:05.942 13:05:09 -- bdev/bdev_raid.sh@197 -- # return 1 00:21:05.942 13:05:09 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:21:05.942 13:05:09 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:21:05.942 13:05:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:05.942 13:05:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:21:05.942 13:05:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:05.942 13:05:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:05.942 13:05:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:05.942 13:05:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:05.942 13:05:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:05.942 13:05:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:05.942 13:05:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:05.942 13:05:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.942 13:05:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:06.202 13:05:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:06.202 "name": "Existed_Raid", 00:21:06.202 "uuid": "96706ff6-519f-432c-af96-c2de3d235c6d", 00:21:06.202 "strip_size_kb": 64, 00:21:06.202 "state": "offline", 00:21:06.202 "raid_level": "concat", 00:21:06.202 "superblock": false, 00:21:06.202 "num_base_bdevs": 4, 00:21:06.202 "num_base_bdevs_discovered": 3, 00:21:06.202 "num_base_bdevs_operational": 3, 00:21:06.202 "base_bdevs_list": [ 00:21:06.202 { 00:21:06.202 "name": null, 00:21:06.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.202 "is_configured": false, 00:21:06.202 "data_offset": 0, 00:21:06.202 "data_size": 65536 00:21:06.202 }, 00:21:06.202 { 00:21:06.202 "name": "BaseBdev2", 00:21:06.202 "uuid": "2c75d1ae-6b95-48ea-b63d-668f7868cabc", 00:21:06.202 "is_configured": true, 00:21:06.202 "data_offset": 0, 00:21:06.202 "data_size": 65536 00:21:06.202 }, 00:21:06.202 { 00:21:06.202 "name": "BaseBdev3", 00:21:06.202 "uuid": "af291139-c57d-4be7-94ce-0d825c9bf2ff", 00:21:06.202 "is_configured": true, 00:21:06.202 "data_offset": 0, 00:21:06.202 "data_size": 65536 00:21:06.202 }, 00:21:06.202 { 00:21:06.202 "name": "BaseBdev4", 00:21:06.202 "uuid": "159acbed-8c7f-430a-ac43-313410262b17", 00:21:06.202 "is_configured": true, 00:21:06.202 "data_offset": 0, 00:21:06.202 "data_size": 
65536 00:21:06.202 } 00:21:06.202 ] 00:21:06.202 }' 00:21:06.202 13:05:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:06.202 13:05:10 -- common/autotest_common.sh@10 -- # set +x 00:21:07.138 13:05:10 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:21:07.138 13:05:10 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:07.138 13:05:10 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:07.138 13:05:10 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:07.138 13:05:11 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:07.138 13:05:11 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:07.138 13:05:11 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:07.397 [2024-04-17 13:05:11.458952] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:07.656 13:05:11 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:07.656 13:05:11 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:07.656 13:05:11 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:07.656 13:05:11 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:07.656 13:05:11 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:07.656 13:05:11 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:07.656 13:05:11 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:08.223 [2024-04-17 13:05:12.083336] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:08.223 13:05:12 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:08.223 13:05:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:08.223 13:05:12 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.223 13:05:12 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:08.482 13:05:12 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:08.482 13:05:12 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:08.482 13:05:12 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:21:08.741 [2024-04-17 13:05:12.740606] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:08.741 [2024-04-17 13:05:12.740863] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:21:08.741 13:05:12 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:08.741 13:05:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:08.741 13:05:12 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.741 13:05:12 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:21:08.999 13:05:13 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:21:08.999 13:05:13 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:21:08.999 13:05:13 -- bdev/bdev_raid.sh@287 -- # killprocess 127482 00:21:08.999 13:05:13 -- common/autotest_common.sh@924 -- # '[' -z 127482 ']' 00:21:08.999 13:05:13 -- common/autotest_common.sh@928 -- # kill -0 127482 00:21:08.999 13:05:13 -- common/autotest_common.sh@929 -- # uname 00:21:08.999 13:05:13 -- common/autotest_common.sh@929 -- # '[' Linux = 
Linux ']' 00:21:08.999 13:05:13 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 127482 00:21:08.999 killing process with pid 127482 00:21:08.999 13:05:13 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:21:08.999 13:05:13 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:21:08.999 13:05:13 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 127482' 00:21:08.999 13:05:13 -- common/autotest_common.sh@943 -- # kill 127482 00:21:08.999 13:05:13 -- common/autotest_common.sh@948 -- # wait 127482 00:21:08.999 [2024-04-17 13:05:13.095243] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:08.999 [2024-04-17 13:05:13.095363] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:10.384 ************************************ 00:21:10.384 END TEST raid_state_function_test 00:21:10.384 ************************************ 00:21:10.384 13:05:14 -- bdev/bdev_raid.sh@289 -- # return 0 00:21:10.384 00:21:10.384 real 0m15.818s 00:21:10.384 user 0m28.371s 00:21:10.384 sys 0m1.730s 00:21:10.384 13:05:14 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:21:10.384 13:05:14 -- common/autotest_common.sh@10 -- # set +x 00:21:10.384 13:05:14 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:21:10.384 13:05:14 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:21:10.384 13:05:14 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:21:10.384 13:05:14 -- common/autotest_common.sh@10 -- # set +x 00:21:10.384 ************************************ 00:21:10.384 START TEST raid_state_function_test_sb 00:21:10.384 ************************************ 00:21:10.384 13:05:14 -- common/autotest_common.sh@1099 -- # raid_state_function_test concat 4 true 00:21:10.384 13:05:14 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:21:10.384 13:05:14 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:21:10.384 13:05:14 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:21:10.384 13:05:14 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:21:10.384 13:05:14 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:21:10.384 13:05:14 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:21:10.384 13:05:14 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:10.384 13:05:14 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:21:10.384 13:05:14 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:10.384 13:05:14 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:10.384 13:05:14 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:21:10.384 13:05:14 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:10.384 13:05:14 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:10.384 13:05:14 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:21:10.384 13:05:14 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:10.384 13:05:14 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:10.384 13:05:14 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:21:10.384 13:05:14 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:10.384 13:05:14 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:10.384 Process raid pid: 127980 00:21:10.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
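Aside: this second pass, raid_state_function_test_sb, repeats the same concat/4-bdev state checks with superblock=true, so every create call gains -s. A sketch of the only change, matching the command shape that appears below in this log; with a superblock each 65536-block malloc base bdev is expected to report data_offset 2048 and data_size 63488 instead of 0 and 65536, which is visible further down:

    # Identical to the non-sb test except for -s, which reserves an
    # on-disk superblock region at the start of every base bdev
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -s -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid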
00:21:10.385 13:05:14 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:21:10.385 13:05:14 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:21:10.385 13:05:14 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:21:10.385 13:05:14 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:21:10.385 13:05:14 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:21:10.385 13:05:14 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:21:10.385 13:05:14 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:21:10.385 13:05:14 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:21:10.385 13:05:14 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:21:10.385 13:05:14 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:21:10.385 13:05:14 -- bdev/bdev_raid.sh@226 -- # raid_pid=127980 00:21:10.385 13:05:14 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 127980' 00:21:10.385 13:05:14 -- bdev/bdev_raid.sh@228 -- # waitforlisten 127980 /var/tmp/spdk-raid.sock 00:21:10.385 13:05:14 -- common/autotest_common.sh@817 -- # '[' -z 127980 ']' 00:21:10.385 13:05:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:10.385 13:05:14 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:10.385 13:05:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:10.385 13:05:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:10.385 13:05:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:10.385 13:05:14 -- common/autotest_common.sh@10 -- # set +x 00:21:10.385 [2024-04-17 13:05:14.404958] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
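Aside: waitforlisten 127980 /var/tmp/spdk-raid.sock, traced above, blocks until the freshly launched bdev_svc app answers on its UNIX-domain RPC socket before any bdev_raid RPCs are sent. A rough sketch of what such a wait amounts to, assuming a simple poll loop; rpc_get_methods is used here only as a cheap liveness probe and is not necessarily what the helper calls internally:

    # Assumed shape of waitforlisten: poll the RPC socket until the target
    # responds, bailing out if the process dies before it ever listens
    while ! /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            rpc_get_methods >/dev/null 2>&1; do
        kill -0 127980 2>/dev/null || exit 1   # target exited early
        sleep 0.1
    done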
00:21:10.385 [2024-04-17 13:05:14.405357] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:10.643 [2024-04-17 13:05:14.582772] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.902 [2024-04-17 13:05:14.840391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.160 [2024-04-17 13:05:15.054181] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:11.419 13:05:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:11.419 13:05:15 -- common/autotest_common.sh@850 -- # return 0 00:21:11.419 13:05:15 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:11.677 [2024-04-17 13:05:15.593213] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:11.677 [2024-04-17 13:05:15.593537] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:11.677 [2024-04-17 13:05:15.593646] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:11.677 [2024-04-17 13:05:15.593776] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:11.677 [2024-04-17 13:05:15.593874] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:11.677 [2024-04-17 13:05:15.593956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:11.677 [2024-04-17 13:05:15.594155] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:11.677 [2024-04-17 13:05:15.594217] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:11.677 13:05:15 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:11.677 13:05:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:11.677 13:05:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:11.677 13:05:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:11.677 13:05:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:11.677 13:05:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:11.677 13:05:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:11.677 13:05:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:11.677 13:05:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:11.677 13:05:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:11.677 13:05:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.678 13:05:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:11.936 13:05:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:11.936 "name": "Existed_Raid", 00:21:11.936 "uuid": "5e493160-1679-46d5-b426-a7d7a175f073", 00:21:11.936 "strip_size_kb": 64, 00:21:11.936 "state": "configuring", 00:21:11.936 "raid_level": "concat", 00:21:11.936 "superblock": true, 00:21:11.936 "num_base_bdevs": 4, 00:21:11.936 "num_base_bdevs_discovered": 0, 00:21:11.936 "num_base_bdevs_operational": 4, 00:21:11.936 "base_bdevs_list": [ 00:21:11.936 { 
00:21:11.936 "name": "BaseBdev1", 00:21:11.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.936 "is_configured": false, 00:21:11.936 "data_offset": 0, 00:21:11.936 "data_size": 0 00:21:11.936 }, 00:21:11.936 { 00:21:11.936 "name": "BaseBdev2", 00:21:11.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.936 "is_configured": false, 00:21:11.936 "data_offset": 0, 00:21:11.936 "data_size": 0 00:21:11.936 }, 00:21:11.936 { 00:21:11.936 "name": "BaseBdev3", 00:21:11.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.936 "is_configured": false, 00:21:11.936 "data_offset": 0, 00:21:11.936 "data_size": 0 00:21:11.936 }, 00:21:11.936 { 00:21:11.936 "name": "BaseBdev4", 00:21:11.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.936 "is_configured": false, 00:21:11.936 "data_offset": 0, 00:21:11.936 "data_size": 0 00:21:11.936 } 00:21:11.936 ] 00:21:11.936 }' 00:21:11.936 13:05:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:11.936 13:05:15 -- common/autotest_common.sh@10 -- # set +x 00:21:12.504 13:05:16 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:12.763 [2024-04-17 13:05:16.753310] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:12.763 [2024-04-17 13:05:16.753599] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:21:12.763 13:05:16 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:13.022 [2024-04-17 13:05:17.021411] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:13.022 [2024-04-17 13:05:17.021812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:13.022 [2024-04-17 13:05:17.021913] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:13.022 [2024-04-17 13:05:17.021977] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:13.022 [2024-04-17 13:05:17.022066] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:13.022 [2024-04-17 13:05:17.022140] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:13.022 [2024-04-17 13:05:17.022171] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:13.022 [2024-04-17 13:05:17.022327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:13.022 13:05:17 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:13.318 [2024-04-17 13:05:17.329731] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:13.318 BaseBdev1 00:21:13.318 13:05:17 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:21:13.318 13:05:17 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:21:13.318 13:05:17 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:21:13.318 13:05:17 -- common/autotest_common.sh@887 -- # local i 00:21:13.318 13:05:17 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:21:13.318 13:05:17 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:21:13.318 13:05:17 -- common/autotest_common.sh@890 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:13.578 13:05:17 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:13.836 [ 00:21:13.836 { 00:21:13.836 "name": "BaseBdev1", 00:21:13.836 "aliases": [ 00:21:13.836 "1f5ca059-8494-461f-b72e-4a315764ae08" 00:21:13.836 ], 00:21:13.836 "product_name": "Malloc disk", 00:21:13.836 "block_size": 512, 00:21:13.836 "num_blocks": 65536, 00:21:13.836 "uuid": "1f5ca059-8494-461f-b72e-4a315764ae08", 00:21:13.836 "assigned_rate_limits": { 00:21:13.836 "rw_ios_per_sec": 0, 00:21:13.836 "rw_mbytes_per_sec": 0, 00:21:13.836 "r_mbytes_per_sec": 0, 00:21:13.836 "w_mbytes_per_sec": 0 00:21:13.836 }, 00:21:13.836 "claimed": true, 00:21:13.836 "claim_type": "exclusive_write", 00:21:13.836 "zoned": false, 00:21:13.836 "supported_io_types": { 00:21:13.836 "read": true, 00:21:13.836 "write": true, 00:21:13.836 "unmap": true, 00:21:13.836 "write_zeroes": true, 00:21:13.836 "flush": true, 00:21:13.836 "reset": true, 00:21:13.836 "compare": false, 00:21:13.836 "compare_and_write": false, 00:21:13.836 "abort": true, 00:21:13.836 "nvme_admin": false, 00:21:13.836 "nvme_io": false 00:21:13.836 }, 00:21:13.836 "memory_domains": [ 00:21:13.836 { 00:21:13.836 "dma_device_id": "system", 00:21:13.836 "dma_device_type": 1 00:21:13.836 }, 00:21:13.836 { 00:21:13.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:13.836 "dma_device_type": 2 00:21:13.836 } 00:21:13.836 ], 00:21:13.836 "driver_specific": {} 00:21:13.836 } 00:21:13.836 ] 00:21:13.836 13:05:17 -- common/autotest_common.sh@893 -- # return 0 00:21:13.836 13:05:17 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:13.837 13:05:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:13.837 13:05:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:13.837 13:05:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:13.837 13:05:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:13.837 13:05:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:13.837 13:05:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:13.837 13:05:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:13.837 13:05:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:13.837 13:05:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:13.837 13:05:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.837 13:05:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:14.095 13:05:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:14.095 "name": "Existed_Raid", 00:21:14.095 "uuid": "5d2c8b43-3217-4f4f-bdc8-af3d17c5a8ba", 00:21:14.095 "strip_size_kb": 64, 00:21:14.095 "state": "configuring", 00:21:14.095 "raid_level": "concat", 00:21:14.095 "superblock": true, 00:21:14.095 "num_base_bdevs": 4, 00:21:14.095 "num_base_bdevs_discovered": 1, 00:21:14.095 "num_base_bdevs_operational": 4, 00:21:14.095 "base_bdevs_list": [ 00:21:14.095 { 00:21:14.095 "name": "BaseBdev1", 00:21:14.095 "uuid": "1f5ca059-8494-461f-b72e-4a315764ae08", 00:21:14.095 "is_configured": true, 00:21:14.095 "data_offset": 2048, 00:21:14.095 "data_size": 63488 00:21:14.095 }, 00:21:14.095 { 00:21:14.095 "name": "BaseBdev2", 00:21:14.095 "uuid": "00000000-0000-0000-0000-000000000000", 
00:21:14.095 "is_configured": false, 00:21:14.095 "data_offset": 0, 00:21:14.095 "data_size": 0 00:21:14.095 }, 00:21:14.095 { 00:21:14.095 "name": "BaseBdev3", 00:21:14.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.095 "is_configured": false, 00:21:14.095 "data_offset": 0, 00:21:14.095 "data_size": 0 00:21:14.095 }, 00:21:14.095 { 00:21:14.095 "name": "BaseBdev4", 00:21:14.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.095 "is_configured": false, 00:21:14.095 "data_offset": 0, 00:21:14.095 "data_size": 0 00:21:14.095 } 00:21:14.095 ] 00:21:14.095 }' 00:21:14.095 13:05:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:14.095 13:05:18 -- common/autotest_common.sh@10 -- # set +x 00:21:15.032 13:05:18 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:15.032 [2024-04-17 13:05:19.122315] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:15.032 [2024-04-17 13:05:19.122632] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:21:15.032 13:05:19 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:21:15.032 13:05:19 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:15.600 13:05:19 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:15.859 BaseBdev1 00:21:15.859 13:05:19 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:21:15.859 13:05:19 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:21:15.859 13:05:19 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:21:15.859 13:05:19 -- common/autotest_common.sh@887 -- # local i 00:21:15.859 13:05:19 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:21:15.859 13:05:19 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:21:15.859 13:05:19 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:16.118 13:05:20 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:16.377 [ 00:21:16.377 { 00:21:16.377 "name": "BaseBdev1", 00:21:16.377 "aliases": [ 00:21:16.377 "ed937b60-b770-4d94-9011-333f747ad030" 00:21:16.377 ], 00:21:16.377 "product_name": "Malloc disk", 00:21:16.377 "block_size": 512, 00:21:16.377 "num_blocks": 65536, 00:21:16.377 "uuid": "ed937b60-b770-4d94-9011-333f747ad030", 00:21:16.377 "assigned_rate_limits": { 00:21:16.377 "rw_ios_per_sec": 0, 00:21:16.377 "rw_mbytes_per_sec": 0, 00:21:16.377 "r_mbytes_per_sec": 0, 00:21:16.377 "w_mbytes_per_sec": 0 00:21:16.377 }, 00:21:16.377 "claimed": false, 00:21:16.377 "zoned": false, 00:21:16.377 "supported_io_types": { 00:21:16.377 "read": true, 00:21:16.377 "write": true, 00:21:16.377 "unmap": true, 00:21:16.377 "write_zeroes": true, 00:21:16.377 "flush": true, 00:21:16.377 "reset": true, 00:21:16.377 "compare": false, 00:21:16.377 "compare_and_write": false, 00:21:16.377 "abort": true, 00:21:16.377 "nvme_admin": false, 00:21:16.377 "nvme_io": false 00:21:16.377 }, 00:21:16.377 "memory_domains": [ 00:21:16.377 { 00:21:16.377 "dma_device_id": "system", 00:21:16.377 "dma_device_type": 1 00:21:16.377 }, 00:21:16.377 { 00:21:16.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:16.377 "dma_device_type": 2 
00:21:16.377 } 00:21:16.377 ], 00:21:16.377 "driver_specific": {} 00:21:16.377 } 00:21:16.377 ] 00:21:16.377 13:05:20 -- common/autotest_common.sh@893 -- # return 0 00:21:16.377 13:05:20 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:16.636 [2024-04-17 13:05:20.548706] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:16.636 [2024-04-17 13:05:20.551426] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:16.636 [2024-04-17 13:05:20.551673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:16.636 [2024-04-17 13:05:20.551783] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:16.636 [2024-04-17 13:05:20.551869] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:16.636 [2024-04-17 13:05:20.551985] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:16.636 [2024-04-17 13:05:20.552054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:16.636 13:05:20 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:21:16.636 13:05:20 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:16.636 13:05:20 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:16.636 13:05:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:16.636 13:05:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:16.636 13:05:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:16.636 13:05:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:16.636 13:05:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:16.636 13:05:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:16.636 13:05:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:16.636 13:05:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:16.636 13:05:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:16.636 13:05:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:16.636 13:05:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:16.894 13:05:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:16.894 "name": "Existed_Raid", 00:21:16.894 "uuid": "30753b34-b539-46cd-9ad6-cc052cd093ca", 00:21:16.894 "strip_size_kb": 64, 00:21:16.894 "state": "configuring", 00:21:16.894 "raid_level": "concat", 00:21:16.894 "superblock": true, 00:21:16.894 "num_base_bdevs": 4, 00:21:16.894 "num_base_bdevs_discovered": 1, 00:21:16.894 "num_base_bdevs_operational": 4, 00:21:16.894 "base_bdevs_list": [ 00:21:16.894 { 00:21:16.894 "name": "BaseBdev1", 00:21:16.894 "uuid": "ed937b60-b770-4d94-9011-333f747ad030", 00:21:16.894 "is_configured": true, 00:21:16.894 "data_offset": 2048, 00:21:16.894 "data_size": 63488 00:21:16.894 }, 00:21:16.894 { 00:21:16.894 "name": "BaseBdev2", 00:21:16.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.894 "is_configured": false, 00:21:16.894 "data_offset": 0, 00:21:16.894 "data_size": 0 00:21:16.894 }, 00:21:16.894 { 00:21:16.894 "name": "BaseBdev3", 00:21:16.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.894 "is_configured": 
false, 00:21:16.894 "data_offset": 0, 00:21:16.894 "data_size": 0 00:21:16.894 }, 00:21:16.894 { 00:21:16.894 "name": "BaseBdev4", 00:21:16.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.894 "is_configured": false, 00:21:16.894 "data_offset": 0, 00:21:16.894 "data_size": 0 00:21:16.894 } 00:21:16.894 ] 00:21:16.894 }' 00:21:16.894 13:05:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:16.894 13:05:20 -- common/autotest_common.sh@10 -- # set +x 00:21:17.468 13:05:21 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:17.727 [2024-04-17 13:05:21.868537] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:17.727 BaseBdev2 00:21:17.987 13:05:21 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:21:17.987 13:05:21 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:21:17.987 13:05:21 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:21:17.987 13:05:21 -- common/autotest_common.sh@887 -- # local i 00:21:17.987 13:05:21 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:21:17.987 13:05:21 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:21:17.987 13:05:21 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:18.246 13:05:22 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:18.505 [ 00:21:18.505 { 00:21:18.505 "name": "BaseBdev2", 00:21:18.505 "aliases": [ 00:21:18.505 "4dc923e6-0c0a-40ee-a7e6-c1da50bcbbef" 00:21:18.505 ], 00:21:18.505 "product_name": "Malloc disk", 00:21:18.505 "block_size": 512, 00:21:18.505 "num_blocks": 65536, 00:21:18.505 "uuid": "4dc923e6-0c0a-40ee-a7e6-c1da50bcbbef", 00:21:18.505 "assigned_rate_limits": { 00:21:18.505 "rw_ios_per_sec": 0, 00:21:18.505 "rw_mbytes_per_sec": 0, 00:21:18.505 "r_mbytes_per_sec": 0, 00:21:18.505 "w_mbytes_per_sec": 0 00:21:18.505 }, 00:21:18.505 "claimed": true, 00:21:18.505 "claim_type": "exclusive_write", 00:21:18.505 "zoned": false, 00:21:18.505 "supported_io_types": { 00:21:18.505 "read": true, 00:21:18.505 "write": true, 00:21:18.505 "unmap": true, 00:21:18.505 "write_zeroes": true, 00:21:18.505 "flush": true, 00:21:18.505 "reset": true, 00:21:18.505 "compare": false, 00:21:18.505 "compare_and_write": false, 00:21:18.505 "abort": true, 00:21:18.505 "nvme_admin": false, 00:21:18.505 "nvme_io": false 00:21:18.505 }, 00:21:18.505 "memory_domains": [ 00:21:18.505 { 00:21:18.505 "dma_device_id": "system", 00:21:18.505 "dma_device_type": 1 00:21:18.505 }, 00:21:18.505 { 00:21:18.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:18.505 "dma_device_type": 2 00:21:18.505 } 00:21:18.505 ], 00:21:18.505 "driver_specific": {} 00:21:18.505 } 00:21:18.505 ] 00:21:18.505 13:05:22 -- common/autotest_common.sh@893 -- # return 0 00:21:18.505 13:05:22 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:18.505 13:05:22 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:18.505 13:05:22 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:18.505 13:05:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:18.505 13:05:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:18.505 13:05:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:18.505 13:05:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 
00:21:18.505 13:05:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:18.505 13:05:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:18.505 13:05:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:18.505 13:05:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:18.505 13:05:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:18.505 13:05:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:18.505 13:05:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:18.765 13:05:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:18.765 "name": "Existed_Raid", 00:21:18.765 "uuid": "30753b34-b539-46cd-9ad6-cc052cd093ca", 00:21:18.765 "strip_size_kb": 64, 00:21:18.765 "state": "configuring", 00:21:18.765 "raid_level": "concat", 00:21:18.765 "superblock": true, 00:21:18.765 "num_base_bdevs": 4, 00:21:18.765 "num_base_bdevs_discovered": 2, 00:21:18.765 "num_base_bdevs_operational": 4, 00:21:18.765 "base_bdevs_list": [ 00:21:18.765 { 00:21:18.765 "name": "BaseBdev1", 00:21:18.765 "uuid": "ed937b60-b770-4d94-9011-333f747ad030", 00:21:18.765 "is_configured": true, 00:21:18.765 "data_offset": 2048, 00:21:18.765 "data_size": 63488 00:21:18.765 }, 00:21:18.765 { 00:21:18.765 "name": "BaseBdev2", 00:21:18.765 "uuid": "4dc923e6-0c0a-40ee-a7e6-c1da50bcbbef", 00:21:18.765 "is_configured": true, 00:21:18.765 "data_offset": 2048, 00:21:18.765 "data_size": 63488 00:21:18.765 }, 00:21:18.765 { 00:21:18.765 "name": "BaseBdev3", 00:21:18.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.765 "is_configured": false, 00:21:18.765 "data_offset": 0, 00:21:18.765 "data_size": 0 00:21:18.765 }, 00:21:18.765 { 00:21:18.765 "name": "BaseBdev4", 00:21:18.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.765 "is_configured": false, 00:21:18.765 "data_offset": 0, 00:21:18.765 "data_size": 0 00:21:18.765 } 00:21:18.765 ] 00:21:18.765 }' 00:21:18.765 13:05:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:18.765 13:05:22 -- common/autotest_common.sh@10 -- # set +x 00:21:19.333 13:05:23 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:19.900 [2024-04-17 13:05:23.762845] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:19.900 BaseBdev3 00:21:19.900 13:05:23 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:21:19.900 13:05:23 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:21:19.900 13:05:23 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:21:19.900 13:05:23 -- common/autotest_common.sh@887 -- # local i 00:21:19.900 13:05:23 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:21:19.900 13:05:23 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:21:19.900 13:05:23 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:20.158 13:05:24 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:20.158 [ 00:21:20.158 { 00:21:20.158 "name": "BaseBdev3", 00:21:20.158 "aliases": [ 00:21:20.158 "a2c855ef-2cf3-42b6-beb9-ad1580af240c" 00:21:20.158 ], 00:21:20.158 "product_name": "Malloc disk", 00:21:20.158 "block_size": 512, 00:21:20.158 "num_blocks": 65536, 00:21:20.158 "uuid": 
"a2c855ef-2cf3-42b6-beb9-ad1580af240c", 00:21:20.158 "assigned_rate_limits": { 00:21:20.158 "rw_ios_per_sec": 0, 00:21:20.158 "rw_mbytes_per_sec": 0, 00:21:20.158 "r_mbytes_per_sec": 0, 00:21:20.158 "w_mbytes_per_sec": 0 00:21:20.158 }, 00:21:20.158 "claimed": true, 00:21:20.158 "claim_type": "exclusive_write", 00:21:20.158 "zoned": false, 00:21:20.158 "supported_io_types": { 00:21:20.158 "read": true, 00:21:20.158 "write": true, 00:21:20.158 "unmap": true, 00:21:20.158 "write_zeroes": true, 00:21:20.158 "flush": true, 00:21:20.158 "reset": true, 00:21:20.158 "compare": false, 00:21:20.158 "compare_and_write": false, 00:21:20.158 "abort": true, 00:21:20.158 "nvme_admin": false, 00:21:20.158 "nvme_io": false 00:21:20.158 }, 00:21:20.158 "memory_domains": [ 00:21:20.158 { 00:21:20.158 "dma_device_id": "system", 00:21:20.158 "dma_device_type": 1 00:21:20.158 }, 00:21:20.158 { 00:21:20.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:20.158 "dma_device_type": 2 00:21:20.158 } 00:21:20.158 ], 00:21:20.158 "driver_specific": {} 00:21:20.158 } 00:21:20.158 ] 00:21:20.158 13:05:24 -- common/autotest_common.sh@893 -- # return 0 00:21:20.158 13:05:24 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:20.158 13:05:24 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:20.158 13:05:24 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:20.158 13:05:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:20.158 13:05:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:20.158 13:05:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:20.158 13:05:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:20.158 13:05:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:20.158 13:05:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:20.158 13:05:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:20.158 13:05:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:20.158 13:05:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:20.158 13:05:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:20.158 13:05:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:20.417 13:05:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:20.417 "name": "Existed_Raid", 00:21:20.417 "uuid": "30753b34-b539-46cd-9ad6-cc052cd093ca", 00:21:20.417 "strip_size_kb": 64, 00:21:20.417 "state": "configuring", 00:21:20.417 "raid_level": "concat", 00:21:20.417 "superblock": true, 00:21:20.417 "num_base_bdevs": 4, 00:21:20.417 "num_base_bdevs_discovered": 3, 00:21:20.417 "num_base_bdevs_operational": 4, 00:21:20.417 "base_bdevs_list": [ 00:21:20.417 { 00:21:20.417 "name": "BaseBdev1", 00:21:20.417 "uuid": "ed937b60-b770-4d94-9011-333f747ad030", 00:21:20.417 "is_configured": true, 00:21:20.417 "data_offset": 2048, 00:21:20.417 "data_size": 63488 00:21:20.417 }, 00:21:20.417 { 00:21:20.417 "name": "BaseBdev2", 00:21:20.417 "uuid": "4dc923e6-0c0a-40ee-a7e6-c1da50bcbbef", 00:21:20.417 "is_configured": true, 00:21:20.417 "data_offset": 2048, 00:21:20.417 "data_size": 63488 00:21:20.417 }, 00:21:20.417 { 00:21:20.417 "name": "BaseBdev3", 00:21:20.417 "uuid": "a2c855ef-2cf3-42b6-beb9-ad1580af240c", 00:21:20.417 "is_configured": true, 00:21:20.417 "data_offset": 2048, 00:21:20.417 "data_size": 63488 00:21:20.417 }, 00:21:20.417 { 00:21:20.417 "name": "BaseBdev4", 00:21:20.417 
"uuid": "00000000-0000-0000-0000-000000000000", 00:21:20.417 "is_configured": false, 00:21:20.417 "data_offset": 0, 00:21:20.417 "data_size": 0 00:21:20.417 } 00:21:20.417 ] 00:21:20.417 }' 00:21:20.417 13:05:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:20.417 13:05:24 -- common/autotest_common.sh@10 -- # set +x 00:21:21.353 13:05:25 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:21.353 [2024-04-17 13:05:25.483886] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:21.353 [2024-04-17 13:05:25.484401] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:21:21.353 [2024-04-17 13:05:25.484530] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:21.353 BaseBdev4 00:21:21.353 [2024-04-17 13:05:25.484706] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:21:21.353 [2024-04-17 13:05:25.485208] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:21:21.353 [2024-04-17 13:05:25.485330] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:21:21.353 [2024-04-17 13:05:25.485589] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:21.353 13:05:25 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:21:21.353 13:05:25 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:21:21.353 13:05:25 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:21:21.353 13:05:25 -- common/autotest_common.sh@887 -- # local i 00:21:21.353 13:05:25 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:21:21.353 13:05:25 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:21:21.353 13:05:25 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:21.921 13:05:25 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:21.921 [ 00:21:21.921 { 00:21:21.921 "name": "BaseBdev4", 00:21:21.921 "aliases": [ 00:21:21.921 "8cb82e22-8f9e-4161-88ba-012db7bd6eb8" 00:21:21.921 ], 00:21:21.921 "product_name": "Malloc disk", 00:21:21.921 "block_size": 512, 00:21:21.921 "num_blocks": 65536, 00:21:21.921 "uuid": "8cb82e22-8f9e-4161-88ba-012db7bd6eb8", 00:21:21.921 "assigned_rate_limits": { 00:21:21.921 "rw_ios_per_sec": 0, 00:21:21.921 "rw_mbytes_per_sec": 0, 00:21:21.921 "r_mbytes_per_sec": 0, 00:21:21.921 "w_mbytes_per_sec": 0 00:21:21.921 }, 00:21:21.921 "claimed": true, 00:21:21.921 "claim_type": "exclusive_write", 00:21:21.921 "zoned": false, 00:21:21.921 "supported_io_types": { 00:21:21.921 "read": true, 00:21:21.921 "write": true, 00:21:21.921 "unmap": true, 00:21:21.921 "write_zeroes": true, 00:21:21.921 "flush": true, 00:21:21.921 "reset": true, 00:21:21.921 "compare": false, 00:21:21.921 "compare_and_write": false, 00:21:21.921 "abort": true, 00:21:21.921 "nvme_admin": false, 00:21:21.921 "nvme_io": false 00:21:21.921 }, 00:21:21.921 "memory_domains": [ 00:21:21.921 { 00:21:21.921 "dma_device_id": "system", 00:21:21.921 "dma_device_type": 1 00:21:21.921 }, 00:21:21.921 { 00:21:21.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:21.921 "dma_device_type": 2 00:21:21.921 } 00:21:21.921 ], 00:21:21.921 "driver_specific": {} 00:21:21.921 } 00:21:21.921 ] 
00:21:21.921 13:05:26 -- common/autotest_common.sh@893 -- # return 0 00:21:21.921 13:05:26 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:21.921 13:05:26 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:21.921 13:05:26 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:21:21.921 13:05:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:21.921 13:05:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:21.921 13:05:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:21.921 13:05:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:21.921 13:05:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:21.921 13:05:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:21.921 13:05:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:21.921 13:05:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:21.921 13:05:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:21.921 13:05:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.921 13:05:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:22.180 13:05:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:22.180 "name": "Existed_Raid", 00:21:22.180 "uuid": "30753b34-b539-46cd-9ad6-cc052cd093ca", 00:21:22.180 "strip_size_kb": 64, 00:21:22.180 "state": "online", 00:21:22.180 "raid_level": "concat", 00:21:22.180 "superblock": true, 00:21:22.180 "num_base_bdevs": 4, 00:21:22.180 "num_base_bdevs_discovered": 4, 00:21:22.180 "num_base_bdevs_operational": 4, 00:21:22.180 "base_bdevs_list": [ 00:21:22.180 { 00:21:22.180 "name": "BaseBdev1", 00:21:22.180 "uuid": "ed937b60-b770-4d94-9011-333f747ad030", 00:21:22.180 "is_configured": true, 00:21:22.180 "data_offset": 2048, 00:21:22.180 "data_size": 63488 00:21:22.180 }, 00:21:22.180 { 00:21:22.180 "name": "BaseBdev2", 00:21:22.180 "uuid": "4dc923e6-0c0a-40ee-a7e6-c1da50bcbbef", 00:21:22.180 "is_configured": true, 00:21:22.180 "data_offset": 2048, 00:21:22.180 "data_size": 63488 00:21:22.180 }, 00:21:22.180 { 00:21:22.180 "name": "BaseBdev3", 00:21:22.180 "uuid": "a2c855ef-2cf3-42b6-beb9-ad1580af240c", 00:21:22.180 "is_configured": true, 00:21:22.180 "data_offset": 2048, 00:21:22.180 "data_size": 63488 00:21:22.180 }, 00:21:22.180 { 00:21:22.180 "name": "BaseBdev4", 00:21:22.180 "uuid": "8cb82e22-8f9e-4161-88ba-012db7bd6eb8", 00:21:22.180 "is_configured": true, 00:21:22.180 "data_offset": 2048, 00:21:22.180 "data_size": 63488 00:21:22.180 } 00:21:22.180 ] 00:21:22.180 }' 00:21:22.180 13:05:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:22.180 13:05:26 -- common/autotest_common.sh@10 -- # set +x 00:21:23.116 13:05:26 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:23.116 [2024-04-17 13:05:27.212551] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:23.116 [2024-04-17 13:05:27.212790] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:23.116 [2024-04-17 13:05:27.212958] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:23.375 13:05:27 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:21:23.375 13:05:27 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:21:23.375 13:05:27 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:21:23.375 13:05:27 -- bdev/bdev_raid.sh@197 
-- # return 1 00:21:23.375 13:05:27 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:21:23.375 13:05:27 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:21:23.375 13:05:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:23.375 13:05:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:21:23.375 13:05:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:23.375 13:05:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:23.375 13:05:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:23.375 13:05:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:23.375 13:05:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:23.375 13:05:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:23.375 13:05:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:23.375 13:05:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:23.375 13:05:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:23.634 13:05:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:23.634 "name": "Existed_Raid", 00:21:23.634 "uuid": "30753b34-b539-46cd-9ad6-cc052cd093ca", 00:21:23.634 "strip_size_kb": 64, 00:21:23.634 "state": "offline", 00:21:23.634 "raid_level": "concat", 00:21:23.634 "superblock": true, 00:21:23.634 "num_base_bdevs": 4, 00:21:23.634 "num_base_bdevs_discovered": 3, 00:21:23.634 "num_base_bdevs_operational": 3, 00:21:23.634 "base_bdevs_list": [ 00:21:23.634 { 00:21:23.634 "name": null, 00:21:23.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.635 "is_configured": false, 00:21:23.635 "data_offset": 2048, 00:21:23.635 "data_size": 63488 00:21:23.635 }, 00:21:23.635 { 00:21:23.635 "name": "BaseBdev2", 00:21:23.635 "uuid": "4dc923e6-0c0a-40ee-a7e6-c1da50bcbbef", 00:21:23.635 "is_configured": true, 00:21:23.635 "data_offset": 2048, 00:21:23.635 "data_size": 63488 00:21:23.635 }, 00:21:23.635 { 00:21:23.635 "name": "BaseBdev3", 00:21:23.635 "uuid": "a2c855ef-2cf3-42b6-beb9-ad1580af240c", 00:21:23.635 "is_configured": true, 00:21:23.635 "data_offset": 2048, 00:21:23.635 "data_size": 63488 00:21:23.635 }, 00:21:23.635 { 00:21:23.635 "name": "BaseBdev4", 00:21:23.635 "uuid": "8cb82e22-8f9e-4161-88ba-012db7bd6eb8", 00:21:23.635 "is_configured": true, 00:21:23.635 "data_offset": 2048, 00:21:23.635 "data_size": 63488 00:21:23.635 } 00:21:23.635 ] 00:21:23.635 }' 00:21:23.635 13:05:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:23.635 13:05:27 -- common/autotest_common.sh@10 -- # set +x 00:21:24.275 13:05:28 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:21:24.275 13:05:28 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:24.275 13:05:28 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:24.275 13:05:28 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:24.533 13:05:28 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:24.533 13:05:28 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:24.533 13:05:28 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:24.792 [2024-04-17 13:05:28.785972] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:24.792 13:05:28 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:24.792 13:05:28 -- 
bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:24.792 13:05:28 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:24.792 13:05:28 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:25.051 13:05:29 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:25.051 13:05:29 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:25.051 13:05:29 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:25.310 [2024-04-17 13:05:29.408551] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:25.569 13:05:29 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:25.569 13:05:29 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:25.569 13:05:29 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:25.569 13:05:29 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:25.828 13:05:29 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:25.828 13:05:29 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:25.828 13:05:29 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:21:25.828 [2024-04-17 13:05:29.970980] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:25.828 [2024-04-17 13:05:29.971292] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:21:26.086 13:05:30 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:26.086 13:05:30 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:26.086 13:05:30 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.086 13:05:30 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:21:26.345 13:05:30 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:21:26.345 13:05:30 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:21:26.345 13:05:30 -- bdev/bdev_raid.sh@287 -- # killprocess 127980 00:21:26.345 13:05:30 -- common/autotest_common.sh@924 -- # '[' -z 127980 ']' 00:21:26.345 13:05:30 -- common/autotest_common.sh@928 -- # kill -0 127980 00:21:26.345 13:05:30 -- common/autotest_common.sh@929 -- # uname 00:21:26.345 13:05:30 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:21:26.345 13:05:30 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 127980 00:21:26.345 killing process with pid 127980 00:21:26.345 13:05:30 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:21:26.345 13:05:30 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:21:26.345 13:05:30 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 127980' 00:21:26.345 13:05:30 -- common/autotest_common.sh@943 -- # kill 127980 00:21:26.345 13:05:30 -- common/autotest_common.sh@948 -- # wait 127980 00:21:26.345 [2024-04-17 13:05:30.352007] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:26.345 [2024-04-17 13:05:30.352127] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:27.723 ************************************ 00:21:27.723 END TEST raid_state_function_test_sb 00:21:27.723 ************************************ 00:21:27.723 13:05:31 -- bdev/bdev_raid.sh@289 -- # return 0 00:21:27.723 00:21:27.723 real 0m17.116s 00:21:27.723 user 0m30.877s 
00:21:27.723 sys 0m1.907s 00:21:27.723 13:05:31 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:21:27.723 13:05:31 -- common/autotest_common.sh@10 -- # set +x 00:21:27.723 13:05:31 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:21:27.723 13:05:31 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:21:27.723 13:05:31 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:21:27.723 13:05:31 -- common/autotest_common.sh@10 -- # set +x 00:21:27.723 ************************************ 00:21:27.723 START TEST raid_superblock_test 00:21:27.723 ************************************ 00:21:27.723 13:05:31 -- common/autotest_common.sh@1099 -- # raid_superblock_test concat 4 00:21:27.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:27.723 13:05:31 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:21:27.723 13:05:31 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:21:27.723 13:05:31 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:21:27.723 13:05:31 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:21:27.723 13:05:31 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:21:27.723 13:05:31 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:21:27.723 13:05:31 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:21:27.723 13:05:31 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:21:27.723 13:05:31 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:21:27.723 13:05:31 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:21:27.723 13:05:31 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:21:27.723 13:05:31 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:21:27.723 13:05:31 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:21:27.723 13:05:31 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:21:27.723 13:05:31 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:21:27.723 13:05:31 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:21:27.723 13:05:31 -- bdev/bdev_raid.sh@357 -- # raid_pid=128488 00:21:27.723 13:05:31 -- bdev/bdev_raid.sh@358 -- # waitforlisten 128488 /var/tmp/spdk-raid.sock 00:21:27.723 13:05:31 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:21:27.723 13:05:31 -- common/autotest_common.sh@817 -- # '[' -z 128488 ']' 00:21:27.723 13:05:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:27.723 13:05:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:27.723 13:05:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:27.723 13:05:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:27.723 13:05:31 -- common/autotest_common.sh@10 -- # set +x 00:21:27.723 [2024-04-17 13:05:31.590470] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
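(The raid_superblock_test starting up here drives the same RPC socket, but layers a passthru bdev over each malloc bdev and assembles the array with an on-disk superblock. Stripped of harness plumbing, and reusing the $rpc and $sock shorthands from the note above, the sequence traced below amounts to the following sketch, again using only subcommands and arguments that appear verbatim later in this log:

    # For each of pt1..pt4: back a passthru bdev with a malloc bdev,
    # pinning a fixed UUID (shown here for pt1 only).
    $rpc -s $sock bdev_malloc_create 32 512 -b malloc1
    $rpc -s $sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001

    # Assemble a concat array with a 64 KiB strip and a superblock (-s).
    $rpc -s $sock bdev_raid_create -z 64 -s -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1

That superblock is what the negative test further down exercises: after deleting raid_bdev1 and the passthru bdevs, retrying bdev_raid_create directly on 'malloc1 malloc2 malloc3 malloc4' fails with JSON-RPC error -17 ("Failed to create RAID bdev raid_bdev1: File exists"), because each malloc bdev still carries the superblock written during assembly.)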
00:21:27.723 [2024-04-17 13:05:31.590915] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128488 ] 00:21:27.723 [2024-04-17 13:05:31.758913] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.983 [2024-04-17 13:05:31.954274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.242 [2024-04-17 13:05:32.145897] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:28.499 13:05:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:28.499 13:05:32 -- common/autotest_common.sh@850 -- # return 0 00:21:28.499 13:05:32 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:21:28.499 13:05:32 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:21:28.499 13:05:32 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:21:28.500 13:05:32 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:21:28.500 13:05:32 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:28.500 13:05:32 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:28.500 13:05:32 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:21:28.500 13:05:32 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:28.500 13:05:32 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:21:28.759 malloc1 00:21:28.759 13:05:32 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:29.018 [2024-04-17 13:05:33.036515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:29.018 [2024-04-17 13:05:33.036849] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:29.018 [2024-04-17 13:05:33.037001] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:21:29.018 [2024-04-17 13:05:33.037141] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:29.018 [2024-04-17 13:05:33.039798] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:29.018 [2024-04-17 13:05:33.039979] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:29.018 pt1 00:21:29.018 13:05:33 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:21:29.018 13:05:33 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:21:29.018 13:05:33 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:21:29.018 13:05:33 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:21:29.018 13:05:33 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:29.018 13:05:33 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:29.018 13:05:33 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:21:29.018 13:05:33 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:29.018 13:05:33 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:21:29.276 malloc2 00:21:29.276 13:05:33 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:21:29.536 [2024-04-17 13:05:33.579362] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:29.536 [2024-04-17 13:05:33.579620] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:29.536 [2024-04-17 13:05:33.579708] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:21:29.536 [2024-04-17 13:05:33.579950] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:29.536 [2024-04-17 13:05:33.582616] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:29.536 [2024-04-17 13:05:33.582778] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:29.536 pt2 00:21:29.536 13:05:33 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:21:29.536 13:05:33 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:21:29.536 13:05:33 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:21:29.536 13:05:33 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:21:29.536 13:05:33 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:21:29.536 13:05:33 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:29.536 13:05:33 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:21:29.536 13:05:33 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:29.536 13:05:33 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:21:29.794 malloc3 00:21:29.794 13:05:33 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:30.053 [2024-04-17 13:05:34.098210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:30.053 [2024-04-17 13:05:34.098596] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:30.053 [2024-04-17 13:05:34.098750] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:21:30.053 [2024-04-17 13:05:34.098892] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:30.053 [2024-04-17 13:05:34.101530] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:30.053 [2024-04-17 13:05:34.101723] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:30.053 pt3 00:21:30.053 13:05:34 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:21:30.053 13:05:34 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:21:30.053 13:05:34 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:21:30.053 13:05:34 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:21:30.053 13:05:34 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:21:30.053 13:05:34 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:30.053 13:05:34 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:21:30.053 13:05:34 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:30.053 13:05:34 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:21:30.311 malloc4 00:21:30.311 13:05:34 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 
00000000-0000-0000-0000-000000000004 00:21:30.569 [2024-04-17 13:05:34.630930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:30.569 [2024-04-17 13:05:34.631186] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:30.569 [2024-04-17 13:05:34.631385] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:30.569 [2024-04-17 13:05:34.631591] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:30.569 [2024-04-17 13:05:34.634027] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:30.569 [2024-04-17 13:05:34.634224] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:30.569 pt4 00:21:30.569 13:05:34 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:21:30.569 13:05:34 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:21:30.569 13:05:34 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:21:30.828 [2024-04-17 13:05:34.859056] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:30.828 [2024-04-17 13:05:34.861591] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:30.828 [2024-04-17 13:05:34.861840] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:30.828 [2024-04-17 13:05:34.861979] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:30.828 [2024-04-17 13:05:34.862283] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:21:30.828 [2024-04-17 13:05:34.862410] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:30.828 [2024-04-17 13:05:34.862702] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:21:30.828 [2024-04-17 13:05:34.863244] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:21:30.828 [2024-04-17 13:05:34.863442] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:21:30.828 [2024-04-17 13:05:34.863775] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:30.828 13:05:34 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:21:30.828 13:05:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:30.828 13:05:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:30.828 13:05:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:30.828 13:05:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:30.828 13:05:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:30.828 13:05:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:30.828 13:05:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:30.828 13:05:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:30.828 13:05:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:30.828 13:05:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:30.828 13:05:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:31.087 13:05:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:31.087 "name": "raid_bdev1", 00:21:31.087 "uuid": 
"5e55a851-6a70-48f6-8d3e-436fbe5efb08", 00:21:31.087 "strip_size_kb": 64, 00:21:31.087 "state": "online", 00:21:31.087 "raid_level": "concat", 00:21:31.087 "superblock": true, 00:21:31.087 "num_base_bdevs": 4, 00:21:31.087 "num_base_bdevs_discovered": 4, 00:21:31.087 "num_base_bdevs_operational": 4, 00:21:31.087 "base_bdevs_list": [ 00:21:31.087 { 00:21:31.087 "name": "pt1", 00:21:31.087 "uuid": "7a9f2e75-9bf2-5843-83ef-65b73816993c", 00:21:31.087 "is_configured": true, 00:21:31.087 "data_offset": 2048, 00:21:31.087 "data_size": 63488 00:21:31.087 }, 00:21:31.087 { 00:21:31.087 "name": "pt2", 00:21:31.087 "uuid": "b7a98629-100b-50dc-822c-4ce6e20efa9b", 00:21:31.087 "is_configured": true, 00:21:31.087 "data_offset": 2048, 00:21:31.087 "data_size": 63488 00:21:31.087 }, 00:21:31.087 { 00:21:31.087 "name": "pt3", 00:21:31.087 "uuid": "b21173b5-09eb-557a-868d-3ceaa394c58f", 00:21:31.087 "is_configured": true, 00:21:31.087 "data_offset": 2048, 00:21:31.087 "data_size": 63488 00:21:31.087 }, 00:21:31.087 { 00:21:31.087 "name": "pt4", 00:21:31.087 "uuid": "7ea4f02e-21d7-564d-b48d-35bea1b16b92", 00:21:31.087 "is_configured": true, 00:21:31.087 "data_offset": 2048, 00:21:31.087 "data_size": 63488 00:21:31.087 } 00:21:31.087 ] 00:21:31.087 }' 00:21:31.087 13:05:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:31.087 13:05:35 -- common/autotest_common.sh@10 -- # set +x 00:21:32.023 13:05:35 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:32.023 13:05:35 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:21:32.023 [2024-04-17 13:05:36.109692] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:32.023 13:05:36 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=5e55a851-6a70-48f6-8d3e-436fbe5efb08 00:21:32.023 13:05:36 -- bdev/bdev_raid.sh@380 -- # '[' -z 5e55a851-6a70-48f6-8d3e-436fbe5efb08 ']' 00:21:32.023 13:05:36 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:32.281 [2024-04-17 13:05:36.421404] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:32.282 [2024-04-17 13:05:36.421656] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:32.282 [2024-04-17 13:05:36.421862] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:32.282 [2024-04-17 13:05:36.422039] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:32.282 [2024-04-17 13:05:36.422193] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:21:32.540 13:05:36 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:32.540 13:05:36 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:21:32.540 13:05:36 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:21:32.540 13:05:36 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:21:32.540 13:05:36 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:21:32.540 13:05:36 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:33.107 13:05:36 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:21:33.107 13:05:36 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt2 00:21:33.107 13:05:37 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:21:33.107 13:05:37 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:33.674 13:05:37 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:21:33.674 13:05:37 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:21:33.674 13:05:37 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:21:33.674 13:05:37 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:34.241 13:05:38 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:21:34.241 13:05:38 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:21:34.241 13:05:38 -- common/autotest_common.sh@638 -- # local es=0 00:21:34.241 13:05:38 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:21:34.241 13:05:38 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:34.241 13:05:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:34.241 13:05:38 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:34.241 13:05:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:34.241 13:05:38 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:34.241 13:05:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:34.241 13:05:38 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:34.241 13:05:38 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:21:34.241 13:05:38 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:21:34.241 [2024-04-17 13:05:38.357737] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:34.241 [2024-04-17 13:05:38.360166] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:34.241 [2024-04-17 13:05:38.360386] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:21:34.241 [2024-04-17 13:05:38.360551] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:21:34.241 [2024-04-17 13:05:38.360709] bdev_raid.c:2995:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:21:34.241 [2024-04-17 13:05:38.360901] bdev_raid.c:2995:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:21:34.241 [2024-04-17 13:05:38.361054] bdev_raid.c:2995:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:21:34.241 [2024-04-17 13:05:38.361215] bdev_raid.c:2995:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:21:34.241 [2024-04-17 13:05:38.361338] 
bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:34.241 [2024-04-17 13:05:38.361440] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring 00:21:34.241 request: 00:21:34.241 { 00:21:34.241 "name": "raid_bdev1", 00:21:34.241 "raid_level": "concat", 00:21:34.241 "base_bdevs": [ 00:21:34.241 "malloc1", 00:21:34.241 "malloc2", 00:21:34.241 "malloc3", 00:21:34.241 "malloc4" 00:21:34.241 ], 00:21:34.241 "superblock": false, 00:21:34.241 "strip_size_kb": 64, 00:21:34.241 "method": "bdev_raid_create", 00:21:34.241 "req_id": 1 00:21:34.241 } 00:21:34.241 Got JSON-RPC error response 00:21:34.241 response: 00:21:34.241 { 00:21:34.241 "code": -17, 00:21:34.241 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:34.241 } 00:21:34.241 13:05:38 -- common/autotest_common.sh@641 -- # es=1 00:21:34.241 13:05:38 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:34.241 13:05:38 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:34.241 13:05:38 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:34.241 13:05:38 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:34.241 13:05:38 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:21:34.500 13:05:38 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:21:34.500 13:05:38 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:21:34.500 13:05:38 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:34.758 [2024-04-17 13:05:38.809840] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:34.758 [2024-04-17 13:05:38.810154] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:34.758 [2024-04-17 13:05:38.810303] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:34.758 [2024-04-17 13:05:38.810457] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:34.758 [2024-04-17 13:05:38.813349] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:34.758 [2024-04-17 13:05:38.813645] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:34.758 [2024-04-17 13:05:38.813937] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:21:34.758 [2024-04-17 13:05:38.814111] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:34.758 pt1 00:21:34.758 13:05:38 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:21:34.758 13:05:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:34.758 13:05:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:34.758 13:05:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:34.758 13:05:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:34.758 13:05:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:34.758 13:05:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:34.758 13:05:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:34.758 13:05:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:34.758 13:05:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:34.758 13:05:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:34.758 13:05:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.017 13:05:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:35.017 "name": "raid_bdev1", 00:21:35.017 "uuid": "5e55a851-6a70-48f6-8d3e-436fbe5efb08", 00:21:35.017 "strip_size_kb": 64, 00:21:35.017 "state": "configuring", 00:21:35.017 "raid_level": "concat", 00:21:35.017 "superblock": true, 00:21:35.017 "num_base_bdevs": 4, 00:21:35.017 "num_base_bdevs_discovered": 1, 00:21:35.017 "num_base_bdevs_operational": 4, 00:21:35.017 "base_bdevs_list": [ 00:21:35.017 { 00:21:35.017 "name": "pt1", 00:21:35.017 "uuid": "7a9f2e75-9bf2-5843-83ef-65b73816993c", 00:21:35.017 "is_configured": true, 00:21:35.017 "data_offset": 2048, 00:21:35.017 "data_size": 63488 00:21:35.017 }, 00:21:35.017 { 00:21:35.017 "name": null, 00:21:35.017 "uuid": "b7a98629-100b-50dc-822c-4ce6e20efa9b", 00:21:35.017 "is_configured": false, 00:21:35.017 "data_offset": 2048, 00:21:35.017 "data_size": 63488 00:21:35.017 }, 00:21:35.017 { 00:21:35.017 "name": null, 00:21:35.017 "uuid": "b21173b5-09eb-557a-868d-3ceaa394c58f", 00:21:35.017 "is_configured": false, 00:21:35.017 "data_offset": 2048, 00:21:35.017 "data_size": 63488 00:21:35.017 }, 00:21:35.017 { 00:21:35.017 "name": null, 00:21:35.017 "uuid": "7ea4f02e-21d7-564d-b48d-35bea1b16b92", 00:21:35.017 "is_configured": false, 00:21:35.017 "data_offset": 2048, 00:21:35.017 "data_size": 63488 00:21:35.017 } 00:21:35.017 ] 00:21:35.017 }' 00:21:35.017 13:05:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:35.017 13:05:39 -- common/autotest_common.sh@10 -- # set +x 00:21:35.950 13:05:39 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:21:35.950 13:05:39 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:35.950 [2024-04-17 13:05:39.978564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:35.950 [2024-04-17 13:05:39.978933] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:35.950 [2024-04-17 13:05:39.979030] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:21:35.950 [2024-04-17 13:05:39.979252] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:35.950 [2024-04-17 13:05:39.979997] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:35.950 [2024-04-17 13:05:39.980203] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:35.950 [2024-04-17 13:05:39.980443] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:21:35.950 [2024-04-17 13:05:39.980571] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:35.950 pt2 00:21:35.950 13:05:39 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:36.209 [2024-04-17 13:05:40.238606] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:21:36.210 13:05:40 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:21:36.210 13:05:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:36.210 13:05:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:36.210 13:05:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:36.210 13:05:40 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:36.210 13:05:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:36.210 13:05:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:36.210 13:05:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:36.210 13:05:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:36.210 13:05:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:36.210 13:05:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:36.210 13:05:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:36.468 13:05:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:36.468 "name": "raid_bdev1", 00:21:36.468 "uuid": "5e55a851-6a70-48f6-8d3e-436fbe5efb08", 00:21:36.468 "strip_size_kb": 64, 00:21:36.468 "state": "configuring", 00:21:36.468 "raid_level": "concat", 00:21:36.468 "superblock": true, 00:21:36.468 "num_base_bdevs": 4, 00:21:36.468 "num_base_bdevs_discovered": 1, 00:21:36.468 "num_base_bdevs_operational": 4, 00:21:36.468 "base_bdevs_list": [ 00:21:36.468 { 00:21:36.468 "name": "pt1", 00:21:36.468 "uuid": "7a9f2e75-9bf2-5843-83ef-65b73816993c", 00:21:36.468 "is_configured": true, 00:21:36.468 "data_offset": 2048, 00:21:36.468 "data_size": 63488 00:21:36.468 }, 00:21:36.468 { 00:21:36.468 "name": null, 00:21:36.468 "uuid": "b7a98629-100b-50dc-822c-4ce6e20efa9b", 00:21:36.468 "is_configured": false, 00:21:36.468 "data_offset": 2048, 00:21:36.468 "data_size": 63488 00:21:36.468 }, 00:21:36.468 { 00:21:36.468 "name": null, 00:21:36.468 "uuid": "b21173b5-09eb-557a-868d-3ceaa394c58f", 00:21:36.468 "is_configured": false, 00:21:36.468 "data_offset": 2048, 00:21:36.468 "data_size": 63488 00:21:36.468 }, 00:21:36.468 { 00:21:36.468 "name": null, 00:21:36.468 "uuid": "7ea4f02e-21d7-564d-b48d-35bea1b16b92", 00:21:36.468 "is_configured": false, 00:21:36.468 "data_offset": 2048, 00:21:36.468 "data_size": 63488 00:21:36.468 } 00:21:36.468 ] 00:21:36.468 }' 00:21:36.468 13:05:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:36.468 13:05:40 -- common/autotest_common.sh@10 -- # set +x 00:21:37.035 13:05:41 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:21:37.035 13:05:41 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:21:37.035 13:05:41 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:37.293 [2024-04-17 13:05:41.367029] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:37.293 [2024-04-17 13:05:41.367353] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:37.293 [2024-04-17 13:05:41.367476] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:21:37.293 [2024-04-17 13:05:41.367824] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:37.293 [2024-04-17 13:05:41.368424] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:37.293 [2024-04-17 13:05:41.368621] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:37.293 [2024-04-17 13:05:41.368838] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:21:37.293 [2024-04-17 13:05:41.368982] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:37.293 pt2 00:21:37.293 13:05:41 -- 
bdev/bdev_raid.sh@422 -- # (( i++ )) 00:21:37.293 13:05:41 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:21:37.293 13:05:41 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:37.552 [2024-04-17 13:05:41.639041] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:37.553 [2024-04-17 13:05:41.639363] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:37.553 [2024-04-17 13:05:41.639447] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:21:37.553 [2024-04-17 13:05:41.639684] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:37.553 [2024-04-17 13:05:41.640245] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:37.553 [2024-04-17 13:05:41.640422] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:37.553 [2024-04-17 13:05:41.640628] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:21:37.553 [2024-04-17 13:05:41.640771] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:37.553 pt3 00:21:37.553 13:05:41 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:21:37.553 13:05:41 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:21:37.553 13:05:41 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:37.812 [2024-04-17 13:05:41.867123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:37.812 [2024-04-17 13:05:41.867419] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:37.812 [2024-04-17 13:05:41.867511] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:21:37.812 [2024-04-17 13:05:41.867800] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:37.812 [2024-04-17 13:05:41.868346] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:37.812 [2024-04-17 13:05:41.868520] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:37.812 [2024-04-17 13:05:41.868729] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:21:37.812 [2024-04-17 13:05:41.868860] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:37.812 [2024-04-17 13:05:41.869073] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:21:37.812 [2024-04-17 13:05:41.869218] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:37.812 [2024-04-17 13:05:41.869365] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:37.812 [2024-04-17 13:05:41.869739] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:21:37.812 [2024-04-17 13:05:41.869857] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:21:37.812 [2024-04-17 13:05:41.870089] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:37.812 pt4 00:21:37.812 13:05:41 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:21:37.812 13:05:41 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs 
)) 00:21:37.812 13:05:41 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:21:37.812 13:05:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:37.812 13:05:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:37.812 13:05:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:37.812 13:05:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:37.812 13:05:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:37.812 13:05:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:37.812 13:05:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:37.812 13:05:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:37.812 13:05:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:37.812 13:05:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:37.812 13:05:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:38.071 13:05:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:38.071 "name": "raid_bdev1", 00:21:38.071 "uuid": "5e55a851-6a70-48f6-8d3e-436fbe5efb08", 00:21:38.071 "strip_size_kb": 64, 00:21:38.071 "state": "online", 00:21:38.071 "raid_level": "concat", 00:21:38.071 "superblock": true, 00:21:38.071 "num_base_bdevs": 4, 00:21:38.071 "num_base_bdevs_discovered": 4, 00:21:38.071 "num_base_bdevs_operational": 4, 00:21:38.071 "base_bdevs_list": [ 00:21:38.071 { 00:21:38.071 "name": "pt1", 00:21:38.071 "uuid": "7a9f2e75-9bf2-5843-83ef-65b73816993c", 00:21:38.071 "is_configured": true, 00:21:38.071 "data_offset": 2048, 00:21:38.071 "data_size": 63488 00:21:38.071 }, 00:21:38.071 { 00:21:38.071 "name": "pt2", 00:21:38.071 "uuid": "b7a98629-100b-50dc-822c-4ce6e20efa9b", 00:21:38.071 "is_configured": true, 00:21:38.071 "data_offset": 2048, 00:21:38.071 "data_size": 63488 00:21:38.071 }, 00:21:38.071 { 00:21:38.071 "name": "pt3", 00:21:38.071 "uuid": "b21173b5-09eb-557a-868d-3ceaa394c58f", 00:21:38.071 "is_configured": true, 00:21:38.071 "data_offset": 2048, 00:21:38.071 "data_size": 63488 00:21:38.071 }, 00:21:38.071 { 00:21:38.071 "name": "pt4", 00:21:38.071 "uuid": "7ea4f02e-21d7-564d-b48d-35bea1b16b92", 00:21:38.071 "is_configured": true, 00:21:38.071 "data_offset": 2048, 00:21:38.071 "data_size": 63488 00:21:38.071 } 00:21:38.071 ] 00:21:38.071 }' 00:21:38.071 13:05:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:38.071 13:05:42 -- common/autotest_common.sh@10 -- # set +x 00:21:39.008 13:05:42 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:39.008 13:05:42 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:21:39.266 [2024-04-17 13:05:43.223779] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:39.266 13:05:43 -- bdev/bdev_raid.sh@430 -- # '[' 5e55a851-6a70-48f6-8d3e-436fbe5efb08 '!=' 5e55a851-6a70-48f6-8d3e-436fbe5efb08 ']' 00:21:39.266 13:05:43 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:21:39.266 13:05:43 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:21:39.266 13:05:43 -- bdev/bdev_raid.sh@197 -- # return 1 00:21:39.266 13:05:43 -- bdev/bdev_raid.sh@511 -- # killprocess 128488 00:21:39.266 13:05:43 -- common/autotest_common.sh@924 -- # '[' -z 128488 ']' 00:21:39.266 13:05:43 -- common/autotest_common.sh@928 -- # kill -0 128488 00:21:39.266 13:05:43 -- common/autotest_common.sh@929 -- # uname 00:21:39.266 13:05:43 -- 
common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:21:39.266 13:05:43 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 128488 00:21:39.266 13:05:43 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:21:39.266 13:05:43 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:21:39.266 13:05:43 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 128488' 00:21:39.266 killing process with pid 128488 00:21:39.266 13:05:43 -- common/autotest_common.sh@943 -- # kill 128488 00:21:39.266 13:05:43 -- common/autotest_common.sh@948 -- # wait 128488 00:21:39.266 [2024-04-17 13:05:43.261967] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:39.266 [2024-04-17 13:05:43.262062] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:39.267 [2024-04-17 13:05:43.262137] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:39.267 [2024-04-17 13:05:43.262147] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:21:39.546 [2024-04-17 13:05:43.595018] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:40.926 ************************************ 00:21:40.926 END TEST raid_superblock_test 00:21:40.926 ************************************ 00:21:40.926 13:05:44 -- bdev/bdev_raid.sh@513 -- # return 0 00:21:40.926 00:21:40.926 real 0m13.144s 00:21:40.926 user 0m23.226s 00:21:40.926 sys 0m1.410s 00:21:40.926 13:05:44 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:21:40.926 13:05:44 -- common/autotest_common.sh@10 -- # set +x 00:21:40.926 13:05:44 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:21:40.926 13:05:44 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:21:40.926 13:05:44 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:21:40.926 13:05:44 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:21:40.926 13:05:44 -- common/autotest_common.sh@10 -- # set +x 00:21:40.926 ************************************ 00:21:40.926 START TEST raid_state_function_test 00:21:40.926 ************************************ 00:21:40.926 13:05:44 -- common/autotest_common.sh@1099 -- # raid_state_function_test raid1 4 false 00:21:40.926 13:05:44 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:21:40.926 13:05:44 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:21:40.926 13:05:44 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:21:40.926 13:05:44 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:21:40.926 13:05:44 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:21:40.926 13:05:44 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:21:40.926 13:05:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:40.926 13:05:44 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:21:40.926 13:05:44 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:40.926 13:05:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:40.926 13:05:44 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:21:40.926 13:05:44 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:40.926 13:05:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:40.926 13:05:44 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:21:40.926 13:05:44 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:40.926 13:05:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 
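The verify_raid_bdev_state checks traced throughout this test reduce to a single RPC plus a jq filter. A condensed sketch of that check, reusing only the socket path, RPC name, and jq filter that appear verbatim in this trace; the variable handling and the shortened rpc.py path are illustrative:

    # Fetch the raid bdev's info and test its state, as bdev_raid.sh does above.
    # rpc.py stands in for /home/vagrant/spdk_repo/spdk/scripts/rpc.py.
    state=$(rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1") | .state')
    [ "$state" = online ] || echo "unexpected raid state: $state" >&2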
00:21:40.926 13:05:44 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:21:40.926 13:05:44 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:40.926 13:05:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:40.926 13:05:44 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:21:40.926 13:05:44 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:21:40.926 13:05:44 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:21:40.926 13:05:44 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:21:40.926 13:05:44 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:21:40.926 13:05:44 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:21:40.926 13:05:44 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:21:40.926 13:05:44 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:21:40.926 13:05:44 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:21:40.926 13:05:44 -- bdev/bdev_raid.sh@226 -- # raid_pid=128849 00:21:40.926 13:05:44 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:40.926 Process raid pid: 128849 00:21:40.926 13:05:44 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 128849' 00:21:40.926 13:05:44 -- bdev/bdev_raid.sh@228 -- # waitforlisten 128849 /var/tmp/spdk-raid.sock 00:21:40.926 13:05:44 -- common/autotest_common.sh@817 -- # '[' -z 128849 ']' 00:21:40.926 13:05:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:40.926 13:05:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:40.926 13:05:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:40.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:40.926 13:05:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:40.926 13:05:44 -- common/autotest_common.sh@10 -- # set +x 00:21:40.926 [2024-04-17 13:05:44.850561] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
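Each test function above starts a fresh bdev_svc app on a private RPC socket before issuing any RPCs. A minimal sketch of that startup, with the binary path, socket, and flags copied from the trace; the readiness poll is an assumption standing in for the real waitforlisten helper in autotest_common.sh:

    # Launch the target under test (arguments copied from the trace above).
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # Assumed poll: wait until the UNIX socket answers a trivial RPC.
    until rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done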
00:21:40.926 [2024-04-17 13:05:44.851044] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:40.926 [2024-04-17 13:05:45.030325] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.185 [2024-04-17 13:05:45.228516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.444 [2024-04-17 13:05:45.414446] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:41.702 13:05:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:41.702 13:05:45 -- common/autotest_common.sh@850 -- # return 0 00:21:41.702 13:05:45 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:41.960 [2024-04-17 13:05:46.008570] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:41.960 [2024-04-17 13:05:46.008801] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:41.960 [2024-04-17 13:05:46.008905] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:41.960 [2024-04-17 13:05:46.008967] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:41.960 [2024-04-17 13:05:46.009087] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:41.960 [2024-04-17 13:05:46.009165] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:41.960 [2024-04-17 13:05:46.009299] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:41.961 [2024-04-17 13:05:46.009359] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:41.961 13:05:46 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:41.961 13:05:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:41.961 13:05:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:41.961 13:05:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:41.961 13:05:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:41.961 13:05:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:41.961 13:05:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:41.961 13:05:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:41.961 13:05:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:41.961 13:05:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:41.961 13:05:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:41.961 13:05:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:42.220 13:05:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:42.220 "name": "Existed_Raid", 00:21:42.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.220 "strip_size_kb": 0, 00:21:42.220 "state": "configuring", 00:21:42.220 "raid_level": "raid1", 00:21:42.220 "superblock": false, 00:21:42.220 "num_base_bdevs": 4, 00:21:42.220 "num_base_bdevs_discovered": 0, 00:21:42.220 "num_base_bdevs_operational": 4, 00:21:42.220 "base_bdevs_list": [ 00:21:42.220 { 00:21:42.220 "name": 
"BaseBdev1", 00:21:42.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.220 "is_configured": false, 00:21:42.220 "data_offset": 0, 00:21:42.220 "data_size": 0 00:21:42.220 }, 00:21:42.220 { 00:21:42.220 "name": "BaseBdev2", 00:21:42.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.220 "is_configured": false, 00:21:42.220 "data_offset": 0, 00:21:42.220 "data_size": 0 00:21:42.220 }, 00:21:42.220 { 00:21:42.220 "name": "BaseBdev3", 00:21:42.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.220 "is_configured": false, 00:21:42.220 "data_offset": 0, 00:21:42.220 "data_size": 0 00:21:42.220 }, 00:21:42.220 { 00:21:42.220 "name": "BaseBdev4", 00:21:42.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.220 "is_configured": false, 00:21:42.220 "data_offset": 0, 00:21:42.220 "data_size": 0 00:21:42.220 } 00:21:42.220 ] 00:21:42.220 }' 00:21:42.220 13:05:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:42.220 13:05:46 -- common/autotest_common.sh@10 -- # set +x 00:21:43.158 13:05:46 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:43.158 [2024-04-17 13:05:47.176721] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:43.158 [2024-04-17 13:05:47.176981] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:21:43.158 13:05:47 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:43.417 [2024-04-17 13:05:47.396816] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:43.417 [2024-04-17 13:05:47.397108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:43.417 [2024-04-17 13:05:47.397240] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:43.417 [2024-04-17 13:05:47.397310] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:43.417 [2024-04-17 13:05:47.397490] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:43.417 [2024-04-17 13:05:47.397565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:43.417 [2024-04-17 13:05:47.397731] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:43.417 [2024-04-17 13:05:47.397790] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:43.417 13:05:47 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:43.676 [2024-04-17 13:05:47.639922] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:43.676 BaseBdev1 00:21:43.676 13:05:47 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:21:43.676 13:05:47 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:21:43.676 13:05:47 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:21:43.676 13:05:47 -- common/autotest_common.sh@887 -- # local i 00:21:43.676 13:05:47 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:21:43.676 13:05:47 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:21:43.676 13:05:47 -- common/autotest_common.sh@890 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:43.935 13:05:47 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:43.935 [ 00:21:43.935 { 00:21:43.935 "name": "BaseBdev1", 00:21:43.935 "aliases": [ 00:21:43.935 "57d991aa-1392-4605-9caa-20aa515e6327" 00:21:43.935 ], 00:21:43.935 "product_name": "Malloc disk", 00:21:43.935 "block_size": 512, 00:21:43.935 "num_blocks": 65536, 00:21:43.935 "uuid": "57d991aa-1392-4605-9caa-20aa515e6327", 00:21:43.935 "assigned_rate_limits": { 00:21:43.935 "rw_ios_per_sec": 0, 00:21:43.935 "rw_mbytes_per_sec": 0, 00:21:43.935 "r_mbytes_per_sec": 0, 00:21:43.935 "w_mbytes_per_sec": 0 00:21:43.935 }, 00:21:43.935 "claimed": true, 00:21:43.935 "claim_type": "exclusive_write", 00:21:43.935 "zoned": false, 00:21:43.935 "supported_io_types": { 00:21:43.935 "read": true, 00:21:43.935 "write": true, 00:21:43.935 "unmap": true, 00:21:43.935 "write_zeroes": true, 00:21:43.935 "flush": true, 00:21:43.935 "reset": true, 00:21:43.935 "compare": false, 00:21:43.935 "compare_and_write": false, 00:21:43.935 "abort": true, 00:21:43.935 "nvme_admin": false, 00:21:43.935 "nvme_io": false 00:21:43.935 }, 00:21:43.935 "memory_domains": [ 00:21:43.935 { 00:21:43.935 "dma_device_id": "system", 00:21:43.935 "dma_device_type": 1 00:21:43.935 }, 00:21:43.935 { 00:21:43.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:43.935 "dma_device_type": 2 00:21:43.935 } 00:21:43.935 ], 00:21:43.935 "driver_specific": {} 00:21:43.935 } 00:21:43.935 ] 00:21:43.935 13:05:48 -- common/autotest_common.sh@893 -- # return 0 00:21:43.935 13:05:48 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:43.935 13:05:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:43.935 13:05:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:43.935 13:05:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:43.935 13:05:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:43.935 13:05:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:43.935 13:05:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:43.935 13:05:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:43.935 13:05:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:43.935 13:05:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:43.935 13:05:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:43.935 13:05:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:44.194 13:05:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:44.195 "name": "Existed_Raid", 00:21:44.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.195 "strip_size_kb": 0, 00:21:44.195 "state": "configuring", 00:21:44.195 "raid_level": "raid1", 00:21:44.195 "superblock": false, 00:21:44.195 "num_base_bdevs": 4, 00:21:44.195 "num_base_bdevs_discovered": 1, 00:21:44.195 "num_base_bdevs_operational": 4, 00:21:44.195 "base_bdevs_list": [ 00:21:44.195 { 00:21:44.195 "name": "BaseBdev1", 00:21:44.195 "uuid": "57d991aa-1392-4605-9caa-20aa515e6327", 00:21:44.195 "is_configured": true, 00:21:44.195 "data_offset": 0, 00:21:44.195 "data_size": 65536 00:21:44.195 }, 00:21:44.195 { 00:21:44.195 "name": "BaseBdev2", 00:21:44.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.195 
"is_configured": false, 00:21:44.195 "data_offset": 0, 00:21:44.195 "data_size": 0 00:21:44.195 }, 00:21:44.195 { 00:21:44.195 "name": "BaseBdev3", 00:21:44.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.195 "is_configured": false, 00:21:44.195 "data_offset": 0, 00:21:44.195 "data_size": 0 00:21:44.195 }, 00:21:44.195 { 00:21:44.195 "name": "BaseBdev4", 00:21:44.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.195 "is_configured": false, 00:21:44.195 "data_offset": 0, 00:21:44.195 "data_size": 0 00:21:44.195 } 00:21:44.195 ] 00:21:44.195 }' 00:21:44.195 13:05:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:44.195 13:05:48 -- common/autotest_common.sh@10 -- # set +x 00:21:45.130 13:05:48 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:45.130 [2024-04-17 13:05:49.160424] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:45.130 [2024-04-17 13:05:49.160699] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:21:45.130 13:05:49 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:21:45.130 13:05:49 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:45.388 [2024-04-17 13:05:49.364528] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:45.388 [2024-04-17 13:05:49.366890] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:45.388 [2024-04-17 13:05:49.367149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:45.388 [2024-04-17 13:05:49.367280] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:45.388 [2024-04-17 13:05:49.367414] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:45.388 [2024-04-17 13:05:49.367520] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:45.388 [2024-04-17 13:05:49.367658] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:45.388 13:05:49 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:21:45.388 13:05:49 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:45.388 13:05:49 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:45.388 13:05:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:45.388 13:05:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:45.388 13:05:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:45.388 13:05:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:45.388 13:05:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:45.388 13:05:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:45.388 13:05:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:45.388 13:05:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:45.388 13:05:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:45.388 13:05:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.388 13:05:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:45.646 13:05:49 -- 
bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:45.646 "name": "Existed_Raid", 00:21:45.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.646 "strip_size_kb": 0, 00:21:45.646 "state": "configuring", 00:21:45.646 "raid_level": "raid1", 00:21:45.646 "superblock": false, 00:21:45.646 "num_base_bdevs": 4, 00:21:45.646 "num_base_bdevs_discovered": 1, 00:21:45.646 "num_base_bdevs_operational": 4, 00:21:45.646 "base_bdevs_list": [ 00:21:45.646 { 00:21:45.646 "name": "BaseBdev1", 00:21:45.646 "uuid": "57d991aa-1392-4605-9caa-20aa515e6327", 00:21:45.646 "is_configured": true, 00:21:45.646 "data_offset": 0, 00:21:45.646 "data_size": 65536 00:21:45.646 }, 00:21:45.646 { 00:21:45.646 "name": "BaseBdev2", 00:21:45.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.646 "is_configured": false, 00:21:45.646 "data_offset": 0, 00:21:45.646 "data_size": 0 00:21:45.646 }, 00:21:45.646 { 00:21:45.646 "name": "BaseBdev3", 00:21:45.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.646 "is_configured": false, 00:21:45.646 "data_offset": 0, 00:21:45.646 "data_size": 0 00:21:45.646 }, 00:21:45.646 { 00:21:45.646 "name": "BaseBdev4", 00:21:45.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.646 "is_configured": false, 00:21:45.646 "data_offset": 0, 00:21:45.646 "data_size": 0 00:21:45.646 } 00:21:45.646 ] 00:21:45.646 }' 00:21:45.646 13:05:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:45.646 13:05:49 -- common/autotest_common.sh@10 -- # set +x 00:21:46.213 13:05:50 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:46.470 [2024-04-17 13:05:50.586774] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:46.470 BaseBdev2 00:21:46.470 13:05:50 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:21:46.470 13:05:50 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:21:46.470 13:05:50 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:21:46.470 13:05:50 -- common/autotest_common.sh@887 -- # local i 00:21:46.470 13:05:50 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:21:46.471 13:05:50 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:21:46.471 13:05:50 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:46.753 13:05:50 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:47.039 [ 00:21:47.039 { 00:21:47.039 "name": "BaseBdev2", 00:21:47.039 "aliases": [ 00:21:47.039 "0919fb57-da5b-419f-965d-a9a83f9a8674" 00:21:47.039 ], 00:21:47.039 "product_name": "Malloc disk", 00:21:47.039 "block_size": 512, 00:21:47.039 "num_blocks": 65536, 00:21:47.039 "uuid": "0919fb57-da5b-419f-965d-a9a83f9a8674", 00:21:47.039 "assigned_rate_limits": { 00:21:47.039 "rw_ios_per_sec": 0, 00:21:47.039 "rw_mbytes_per_sec": 0, 00:21:47.039 "r_mbytes_per_sec": 0, 00:21:47.039 "w_mbytes_per_sec": 0 00:21:47.039 }, 00:21:47.039 "claimed": true, 00:21:47.039 "claim_type": "exclusive_write", 00:21:47.039 "zoned": false, 00:21:47.039 "supported_io_types": { 00:21:47.039 "read": true, 00:21:47.039 "write": true, 00:21:47.039 "unmap": true, 00:21:47.039 "write_zeroes": true, 00:21:47.039 "flush": true, 00:21:47.039 "reset": true, 00:21:47.039 "compare": false, 00:21:47.039 "compare_and_write": false, 00:21:47.039 "abort": true, 00:21:47.039 "nvme_admin": 
false, 00:21:47.039 "nvme_io": false 00:21:47.039 }, 00:21:47.039 "memory_domains": [ 00:21:47.039 { 00:21:47.039 "dma_device_id": "system", 00:21:47.039 "dma_device_type": 1 00:21:47.039 }, 00:21:47.039 { 00:21:47.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:47.039 "dma_device_type": 2 00:21:47.039 } 00:21:47.039 ], 00:21:47.039 "driver_specific": {} 00:21:47.039 } 00:21:47.039 ] 00:21:47.039 13:05:51 -- common/autotest_common.sh@893 -- # return 0 00:21:47.039 13:05:51 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:47.039 13:05:51 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:47.039 13:05:51 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:47.039 13:05:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:47.039 13:05:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:47.039 13:05:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:47.039 13:05:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:47.039 13:05:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:47.039 13:05:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:47.039 13:05:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:47.039 13:05:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:47.039 13:05:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:47.039 13:05:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.039 13:05:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:47.301 13:05:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:47.302 "name": "Existed_Raid", 00:21:47.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:47.302 "strip_size_kb": 0, 00:21:47.302 "state": "configuring", 00:21:47.302 "raid_level": "raid1", 00:21:47.302 "superblock": false, 00:21:47.302 "num_base_bdevs": 4, 00:21:47.302 "num_base_bdevs_discovered": 2, 00:21:47.302 "num_base_bdevs_operational": 4, 00:21:47.302 "base_bdevs_list": [ 00:21:47.302 { 00:21:47.302 "name": "BaseBdev1", 00:21:47.302 "uuid": "57d991aa-1392-4605-9caa-20aa515e6327", 00:21:47.302 "is_configured": true, 00:21:47.302 "data_offset": 0, 00:21:47.302 "data_size": 65536 00:21:47.302 }, 00:21:47.302 { 00:21:47.302 "name": "BaseBdev2", 00:21:47.302 "uuid": "0919fb57-da5b-419f-965d-a9a83f9a8674", 00:21:47.302 "is_configured": true, 00:21:47.302 "data_offset": 0, 00:21:47.302 "data_size": 65536 00:21:47.302 }, 00:21:47.302 { 00:21:47.302 "name": "BaseBdev3", 00:21:47.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:47.302 "is_configured": false, 00:21:47.302 "data_offset": 0, 00:21:47.302 "data_size": 0 00:21:47.302 }, 00:21:47.302 { 00:21:47.302 "name": "BaseBdev4", 00:21:47.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:47.302 "is_configured": false, 00:21:47.302 "data_offset": 0, 00:21:47.302 "data_size": 0 00:21:47.302 } 00:21:47.302 ] 00:21:47.302 }' 00:21:47.302 13:05:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:47.302 13:05:51 -- common/autotest_common.sh@10 -- # set +x 00:21:47.869 13:05:52 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:48.437 [2024-04-17 13:05:52.282586] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:48.437 BaseBdev3 00:21:48.437 13:05:52 -- bdev/bdev_raid.sh@257 -- # waitforbdev 
BaseBdev3 00:21:48.437 13:05:52 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:21:48.437 13:05:52 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:21:48.437 13:05:52 -- common/autotest_common.sh@887 -- # local i 00:21:48.437 13:05:52 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:21:48.437 13:05:52 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:21:48.437 13:05:52 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:48.437 13:05:52 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:48.696 [ 00:21:48.696 { 00:21:48.696 "name": "BaseBdev3", 00:21:48.696 "aliases": [ 00:21:48.696 "f0f80154-53f9-4248-9484-9c56389b4908" 00:21:48.696 ], 00:21:48.696 "product_name": "Malloc disk", 00:21:48.696 "block_size": 512, 00:21:48.696 "num_blocks": 65536, 00:21:48.696 "uuid": "f0f80154-53f9-4248-9484-9c56389b4908", 00:21:48.696 "assigned_rate_limits": { 00:21:48.696 "rw_ios_per_sec": 0, 00:21:48.696 "rw_mbytes_per_sec": 0, 00:21:48.696 "r_mbytes_per_sec": 0, 00:21:48.696 "w_mbytes_per_sec": 0 00:21:48.696 }, 00:21:48.696 "claimed": true, 00:21:48.696 "claim_type": "exclusive_write", 00:21:48.696 "zoned": false, 00:21:48.696 "supported_io_types": { 00:21:48.696 "read": true, 00:21:48.696 "write": true, 00:21:48.696 "unmap": true, 00:21:48.696 "write_zeroes": true, 00:21:48.696 "flush": true, 00:21:48.696 "reset": true, 00:21:48.696 "compare": false, 00:21:48.696 "compare_and_write": false, 00:21:48.696 "abort": true, 00:21:48.696 "nvme_admin": false, 00:21:48.696 "nvme_io": false 00:21:48.696 }, 00:21:48.696 "memory_domains": [ 00:21:48.696 { 00:21:48.696 "dma_device_id": "system", 00:21:48.696 "dma_device_type": 1 00:21:48.696 }, 00:21:48.696 { 00:21:48.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:48.696 "dma_device_type": 2 00:21:48.696 } 00:21:48.696 ], 00:21:48.696 "driver_specific": {} 00:21:48.696 } 00:21:48.696 ] 00:21:48.696 13:05:52 -- common/autotest_common.sh@893 -- # return 0 00:21:48.696 13:05:52 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:48.696 13:05:52 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:48.696 13:05:52 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:48.696 13:05:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:48.696 13:05:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:48.696 13:05:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:48.696 13:05:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:48.696 13:05:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:48.696 13:05:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:48.696 13:05:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:48.696 13:05:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:48.696 13:05:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:48.696 13:05:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.696 13:05:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:48.955 13:05:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:48.955 "name": "Existed_Raid", 00:21:48.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.955 "strip_size_kb": 0, 00:21:48.955 
"state": "configuring", 00:21:48.955 "raid_level": "raid1", 00:21:48.955 "superblock": false, 00:21:48.955 "num_base_bdevs": 4, 00:21:48.955 "num_base_bdevs_discovered": 3, 00:21:48.955 "num_base_bdevs_operational": 4, 00:21:48.955 "base_bdevs_list": [ 00:21:48.955 { 00:21:48.955 "name": "BaseBdev1", 00:21:48.955 "uuid": "57d991aa-1392-4605-9caa-20aa515e6327", 00:21:48.955 "is_configured": true, 00:21:48.955 "data_offset": 0, 00:21:48.955 "data_size": 65536 00:21:48.955 }, 00:21:48.955 { 00:21:48.955 "name": "BaseBdev2", 00:21:48.955 "uuid": "0919fb57-da5b-419f-965d-a9a83f9a8674", 00:21:48.955 "is_configured": true, 00:21:48.955 "data_offset": 0, 00:21:48.955 "data_size": 65536 00:21:48.955 }, 00:21:48.955 { 00:21:48.955 "name": "BaseBdev3", 00:21:48.955 "uuid": "f0f80154-53f9-4248-9484-9c56389b4908", 00:21:48.955 "is_configured": true, 00:21:48.955 "data_offset": 0, 00:21:48.955 "data_size": 65536 00:21:48.955 }, 00:21:48.955 { 00:21:48.955 "name": "BaseBdev4", 00:21:48.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.955 "is_configured": false, 00:21:48.955 "data_offset": 0, 00:21:48.955 "data_size": 0 00:21:48.955 } 00:21:48.955 ] 00:21:48.955 }' 00:21:48.955 13:05:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:48.955 13:05:53 -- common/autotest_common.sh@10 -- # set +x 00:21:49.522 13:05:53 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:49.781 [2024-04-17 13:05:53.901140] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:49.781 [2024-04-17 13:05:53.901443] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:21:49.781 [2024-04-17 13:05:53.901485] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:49.781 [2024-04-17 13:05:53.901722] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:21:49.781 [2024-04-17 13:05:53.902263] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:21:49.781 [2024-04-17 13:05:53.902408] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:21:49.781 [2024-04-17 13:05:53.902801] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:49.781 BaseBdev4 00:21:49.781 13:05:53 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:21:49.781 13:05:53 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:21:49.781 13:05:53 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:21:49.781 13:05:53 -- common/autotest_common.sh@887 -- # local i 00:21:49.781 13:05:53 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:21:49.781 13:05:53 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:21:49.781 13:05:53 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:50.040 13:05:54 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:50.299 [ 00:21:50.299 { 00:21:50.299 "name": "BaseBdev4", 00:21:50.299 "aliases": [ 00:21:50.299 "dafb2b6b-6858-4829-97a6-eca6561b5415" 00:21:50.299 ], 00:21:50.299 "product_name": "Malloc disk", 00:21:50.299 "block_size": 512, 00:21:50.299 "num_blocks": 65536, 00:21:50.299 "uuid": "dafb2b6b-6858-4829-97a6-eca6561b5415", 00:21:50.299 "assigned_rate_limits": { 
00:21:50.299 "rw_ios_per_sec": 0, 00:21:50.299 "rw_mbytes_per_sec": 0, 00:21:50.299 "r_mbytes_per_sec": 0, 00:21:50.299 "w_mbytes_per_sec": 0 00:21:50.299 }, 00:21:50.299 "claimed": true, 00:21:50.299 "claim_type": "exclusive_write", 00:21:50.299 "zoned": false, 00:21:50.299 "supported_io_types": { 00:21:50.299 "read": true, 00:21:50.299 "write": true, 00:21:50.299 "unmap": true, 00:21:50.299 "write_zeroes": true, 00:21:50.299 "flush": true, 00:21:50.299 "reset": true, 00:21:50.299 "compare": false, 00:21:50.299 "compare_and_write": false, 00:21:50.299 "abort": true, 00:21:50.299 "nvme_admin": false, 00:21:50.299 "nvme_io": false 00:21:50.299 }, 00:21:50.299 "memory_domains": [ 00:21:50.299 { 00:21:50.299 "dma_device_id": "system", 00:21:50.299 "dma_device_type": 1 00:21:50.299 }, 00:21:50.299 { 00:21:50.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:50.299 "dma_device_type": 2 00:21:50.299 } 00:21:50.299 ], 00:21:50.299 "driver_specific": {} 00:21:50.299 } 00:21:50.299 ] 00:21:50.299 13:05:54 -- common/autotest_common.sh@893 -- # return 0 00:21:50.300 13:05:54 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:50.300 13:05:54 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:50.300 13:05:54 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:21:50.300 13:05:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:50.300 13:05:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:50.300 13:05:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:50.300 13:05:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:50.300 13:05:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:50.300 13:05:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:50.300 13:05:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:50.300 13:05:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:50.300 13:05:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:50.300 13:05:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:50.300 13:05:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:50.558 13:05:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:50.558 "name": "Existed_Raid", 00:21:50.558 "uuid": "b44d99d8-7fb1-4d72-ad88-3954cc40ab9d", 00:21:50.558 "strip_size_kb": 0, 00:21:50.558 "state": "online", 00:21:50.558 "raid_level": "raid1", 00:21:50.558 "superblock": false, 00:21:50.558 "num_base_bdevs": 4, 00:21:50.558 "num_base_bdevs_discovered": 4, 00:21:50.558 "num_base_bdevs_operational": 4, 00:21:50.558 "base_bdevs_list": [ 00:21:50.558 { 00:21:50.558 "name": "BaseBdev1", 00:21:50.558 "uuid": "57d991aa-1392-4605-9caa-20aa515e6327", 00:21:50.558 "is_configured": true, 00:21:50.558 "data_offset": 0, 00:21:50.558 "data_size": 65536 00:21:50.558 }, 00:21:50.558 { 00:21:50.558 "name": "BaseBdev2", 00:21:50.558 "uuid": "0919fb57-da5b-419f-965d-a9a83f9a8674", 00:21:50.558 "is_configured": true, 00:21:50.558 "data_offset": 0, 00:21:50.558 "data_size": 65536 00:21:50.558 }, 00:21:50.558 { 00:21:50.558 "name": "BaseBdev3", 00:21:50.558 "uuid": "f0f80154-53f9-4248-9484-9c56389b4908", 00:21:50.558 "is_configured": true, 00:21:50.558 "data_offset": 0, 00:21:50.558 "data_size": 65536 00:21:50.558 }, 00:21:50.558 { 00:21:50.558 "name": "BaseBdev4", 00:21:50.558 "uuid": "dafb2b6b-6858-4829-97a6-eca6561b5415", 00:21:50.558 "is_configured": true, 00:21:50.558 "data_offset": 0, 
00:21:50.558 "data_size": 65536 00:21:50.558 } 00:21:50.558 ] 00:21:50.558 }' 00:21:50.558 13:05:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:50.558 13:05:54 -- common/autotest_common.sh@10 -- # set +x 00:21:51.123 13:05:55 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:51.382 [2024-04-17 13:05:55.405649] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:51.382 13:05:55 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:21:51.382 13:05:55 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:21:51.382 13:05:55 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:21:51.382 13:05:55 -- bdev/bdev_raid.sh@196 -- # return 0 00:21:51.383 13:05:55 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:21:51.383 13:05:55 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:21:51.383 13:05:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:51.383 13:05:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:51.383 13:05:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:51.383 13:05:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:51.383 13:05:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:51.383 13:05:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:51.383 13:05:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:51.383 13:05:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:51.383 13:05:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:51.383 13:05:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:51.383 13:05:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:51.641 13:05:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:51.641 "name": "Existed_Raid", 00:21:51.641 "uuid": "b44d99d8-7fb1-4d72-ad88-3954cc40ab9d", 00:21:51.641 "strip_size_kb": 0, 00:21:51.641 "state": "online", 00:21:51.641 "raid_level": "raid1", 00:21:51.641 "superblock": false, 00:21:51.641 "num_base_bdevs": 4, 00:21:51.641 "num_base_bdevs_discovered": 3, 00:21:51.641 "num_base_bdevs_operational": 3, 00:21:51.641 "base_bdevs_list": [ 00:21:51.641 { 00:21:51.641 "name": null, 00:21:51.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:51.641 "is_configured": false, 00:21:51.641 "data_offset": 0, 00:21:51.641 "data_size": 65536 00:21:51.641 }, 00:21:51.641 { 00:21:51.641 "name": "BaseBdev2", 00:21:51.641 "uuid": "0919fb57-da5b-419f-965d-a9a83f9a8674", 00:21:51.641 "is_configured": true, 00:21:51.641 "data_offset": 0, 00:21:51.641 "data_size": 65536 00:21:51.641 }, 00:21:51.641 { 00:21:51.641 "name": "BaseBdev3", 00:21:51.641 "uuid": "f0f80154-53f9-4248-9484-9c56389b4908", 00:21:51.641 "is_configured": true, 00:21:51.641 "data_offset": 0, 00:21:51.641 "data_size": 65536 00:21:51.641 }, 00:21:51.641 { 00:21:51.641 "name": "BaseBdev4", 00:21:51.641 "uuid": "dafb2b6b-6858-4829-97a6-eca6561b5415", 00:21:51.642 "is_configured": true, 00:21:51.642 "data_offset": 0, 00:21:51.642 "data_size": 65536 00:21:51.642 } 00:21:51.642 ] 00:21:51.642 }' 00:21:51.642 13:05:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:51.642 13:05:55 -- common/autotest_common.sh@10 -- # set +x 00:21:52.578 13:05:56 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:21:52.578 13:05:56 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:52.578 13:05:56 -- 
bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:52.578 13:05:56 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:52.837 13:05:56 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:52.837 13:05:56 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:52.837 13:05:56 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:52.837 [2024-04-17 13:05:56.966434] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:53.096 13:05:57 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:53.096 13:05:57 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:53.096 13:05:57 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:53.096 13:05:57 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:53.354 13:05:57 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:53.354 13:05:57 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:53.354 13:05:57 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:53.613 [2024-04-17 13:05:57.550476] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:53.613 13:05:57 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:53.613 13:05:57 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:53.613 13:05:57 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:53.613 13:05:57 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:53.872 13:05:57 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:53.872 13:05:57 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:53.872 13:05:57 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:21:54.130 [2024-04-17 13:05:58.088532] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:54.130 [2024-04-17 13:05:58.088814] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:54.130 [2024-04-17 13:05:58.175642] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:54.130 [2024-04-17 13:05:58.176077] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:54.130 [2024-04-17 13:05:58.176194] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:21:54.130 13:05:58 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:54.130 13:05:58 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:54.130 13:05:58 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.130 13:05:58 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:21:54.390 13:05:58 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:21:54.390 13:05:58 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:21:54.390 13:05:58 -- bdev/bdev_raid.sh@287 -- # killprocess 128849 00:21:54.390 13:05:58 -- common/autotest_common.sh@924 -- # '[' -z 128849 ']' 00:21:54.390 13:05:58 -- common/autotest_common.sh@928 -- # kill -0 128849 00:21:54.390 13:05:58 -- common/autotest_common.sh@929 -- # uname 00:21:54.390 
13:05:58 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:21:54.390 13:05:58 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 128849 00:21:54.390 killing process with pid 128849 00:21:54.390 13:05:58 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:21:54.390 13:05:58 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:21:54.390 13:05:58 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 128849' 00:21:54.390 13:05:58 -- common/autotest_common.sh@943 -- # kill 128849 00:21:54.390 13:05:58 -- common/autotest_common.sh@948 -- # wait 128849 00:21:54.390 [2024-04-17 13:05:58.435212] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:54.390 [2024-04-17 13:05:58.435323] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:55.765 ************************************ 00:21:55.765 END TEST raid_state_function_test 00:21:55.765 ************************************ 00:21:55.765 13:05:59 -- bdev/bdev_raid.sh@289 -- # return 0 00:21:55.765 00:21:55.765 real 0m14.728s 00:21:55.765 user 0m26.293s 00:21:55.765 sys 0m1.731s 00:21:55.765 13:05:59 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:21:55.765 13:05:59 -- common/autotest_common.sh@10 -- # set +x 00:21:55.765 13:05:59 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:21:55.765 13:05:59 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:21:55.765 13:05:59 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:21:55.765 13:05:59 -- common/autotest_common.sh@10 -- # set +x 00:21:55.765 ************************************ 00:21:55.765 START TEST raid_state_function_test_sb 00:21:55.765 ************************************ 00:21:55.765 13:05:59 -- common/autotest_common.sh@1099 -- # raid_state_function_test raid1 4 true 00:21:55.766 13:05:59 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:21:55.766 13:05:59 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:21:55.766 13:05:59 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:21:55.766 13:05:59 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:21:55.766 13:05:59 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:21:55.766 13:05:59 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:21:55.766 13:05:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:55.766 13:05:59 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:21:55.766 13:05:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:55.766 13:05:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:55.766 13:05:59 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:21:55.766 13:05:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:55.766 13:05:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:55.766 13:05:59 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:21:55.766 13:05:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:55.766 13:05:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:55.766 13:05:59 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:21:55.766 13:05:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:55.766 13:05:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:55.766 13:05:59 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:21:55.766 13:05:59 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:21:55.766 13:05:59 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:21:55.766 13:05:59 -- bdev/bdev_raid.sh@209 -- # local 
strip_size_create_arg 00:21:55.766 13:05:59 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:21:55.766 13:05:59 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:21:55.766 13:05:59 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:21:55.766 13:05:59 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:21:55.766 13:05:59 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:21:55.766 13:05:59 -- bdev/bdev_raid.sh@226 -- # raid_pid=129313 00:21:55.766 Process raid pid: 129313 00:21:55.766 13:05:59 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 129313' 00:21:55.766 13:05:59 -- bdev/bdev_raid.sh@228 -- # waitforlisten 129313 /var/tmp/spdk-raid.sock 00:21:55.766 13:05:59 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:55.766 13:05:59 -- common/autotest_common.sh@817 -- # '[' -z 129313 ']' 00:21:55.766 13:05:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:55.766 13:05:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:55.766 13:05:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:55.766 13:05:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:55.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:55.766 13:05:59 -- common/autotest_common.sh@10 -- # set +x 00:21:55.766 [2024-04-17 13:05:59.637197] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:21:55.766 [2024-04-17 13:05:59.637405] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:55.766 [2024-04-17 13:05:59.808497] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.024 [2024-04-17 13:06:00.041223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.282 [2024-04-17 13:06:00.238666] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:56.540 13:06:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:56.540 13:06:00 -- common/autotest_common.sh@850 -- # return 0 00:21:56.540 13:06:00 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:56.799 [2024-04-17 13:06:00.770490] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:56.799 [2024-04-17 13:06:00.770823] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:56.799 [2024-04-17 13:06:00.770947] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:56.799 [2024-04-17 13:06:00.771011] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:56.799 [2024-04-17 13:06:00.771181] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:56.799 [2024-04-17 13:06:00.771263] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:56.799 [2024-04-17 13:06:00.771363] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:56.799 [2024-04-17 13:06:00.771441] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:56.799 13:06:00 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:56.799 13:06:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:56.799 13:06:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:56.799 13:06:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:56.799 13:06:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:56.799 13:06:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:56.799 13:06:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:56.799 13:06:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:56.799 13:06:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:56.799 13:06:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:56.799 13:06:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:56.799 13:06:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:57.057 13:06:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:57.057 "name": "Existed_Raid", 00:21:57.057 "uuid": "9ec8fd27-e0bc-425e-9567-aeacdb9f3a77", 00:21:57.057 "strip_size_kb": 0, 00:21:57.057 "state": "configuring", 00:21:57.057 "raid_level": "raid1", 00:21:57.057 "superblock": true, 00:21:57.057 "num_base_bdevs": 4, 00:21:57.057 "num_base_bdevs_discovered": 0, 00:21:57.057 "num_base_bdevs_operational": 4, 00:21:57.057 "base_bdevs_list": [ 00:21:57.057 { 00:21:57.057 "name": "BaseBdev1", 00:21:57.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.057 "is_configured": false, 00:21:57.057 "data_offset": 0, 00:21:57.057 "data_size": 0 00:21:57.057 }, 00:21:57.057 { 00:21:57.057 "name": "BaseBdev2", 00:21:57.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.057 "is_configured": false, 00:21:57.057 "data_offset": 0, 00:21:57.057 "data_size": 0 00:21:57.057 }, 00:21:57.057 { 00:21:57.057 "name": "BaseBdev3", 00:21:57.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.057 "is_configured": false, 00:21:57.057 "data_offset": 0, 00:21:57.057 "data_size": 0 00:21:57.057 }, 00:21:57.057 { 00:21:57.057 "name": "BaseBdev4", 00:21:57.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.057 "is_configured": false, 00:21:57.057 "data_offset": 0, 00:21:57.057 "data_size": 0 00:21:57.057 } 00:21:57.057 ] 00:21:57.057 }' 00:21:57.057 13:06:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:57.057 13:06:01 -- common/autotest_common.sh@10 -- # set +x 00:21:57.623 13:06:01 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:57.882 [2024-04-17 13:06:01.962570] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:57.882 [2024-04-17 13:06:01.962845] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:21:57.882 13:06:01 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:58.141 [2024-04-17 13:06:02.182709] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:58.141 [2024-04-17 13:06:02.182952] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 
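The configuring-state assertions above depend on bdev_raid_create accepting base bdevs that do not exist yet: the raid sits in "configuring" and only moves to "online" once every named base bdev registers. A condensed sketch of that flow, built solely from RPC invocations that appear verbatim in this log; the trailing comment summarizes behaviour the surrounding trace demonstrates:

    # Create the raid shell first (-s = write a superblock), then back-fill bases.
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
    # Repeat for BaseBdev2..BaseBdev4; num_base_bdevs_discovered climbs to 4
    # and the reported state flips from "configuring" to "online".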
00:21:58.141 [2024-04-17 13:06:02.183085] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:58.141 [2024-04-17 13:06:02.183210] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:58.141 [2024-04-17 13:06:02.183320] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:58.141 [2024-04-17 13:06:02.183432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:58.141 [2024-04-17 13:06:02.183528] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:58.141 [2024-04-17 13:06:02.183596] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:58.141 13:06:02 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:58.400 [2024-04-17 13:06:02.430643] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:58.400 BaseBdev1 00:21:58.400 13:06:02 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:21:58.400 13:06:02 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:21:58.400 13:06:02 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:21:58.400 13:06:02 -- common/autotest_common.sh@887 -- # local i 00:21:58.400 13:06:02 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:21:58.400 13:06:02 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:21:58.400 13:06:02 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:58.673 13:06:02 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:58.931 [ 00:21:58.931 { 00:21:58.931 "name": "BaseBdev1", 00:21:58.931 "aliases": [ 00:21:58.931 "7c528496-314b-4271-8ec4-8b781357d044" 00:21:58.931 ], 00:21:58.931 "product_name": "Malloc disk", 00:21:58.931 "block_size": 512, 00:21:58.931 "num_blocks": 65536, 00:21:58.931 "uuid": "7c528496-314b-4271-8ec4-8b781357d044", 00:21:58.931 "assigned_rate_limits": { 00:21:58.931 "rw_ios_per_sec": 0, 00:21:58.931 "rw_mbytes_per_sec": 0, 00:21:58.931 "r_mbytes_per_sec": 0, 00:21:58.931 "w_mbytes_per_sec": 0 00:21:58.931 }, 00:21:58.931 "claimed": true, 00:21:58.931 "claim_type": "exclusive_write", 00:21:58.931 "zoned": false, 00:21:58.931 "supported_io_types": { 00:21:58.931 "read": true, 00:21:58.931 "write": true, 00:21:58.931 "unmap": true, 00:21:58.931 "write_zeroes": true, 00:21:58.931 "flush": true, 00:21:58.931 "reset": true, 00:21:58.931 "compare": false, 00:21:58.931 "compare_and_write": false, 00:21:58.931 "abort": true, 00:21:58.931 "nvme_admin": false, 00:21:58.931 "nvme_io": false 00:21:58.931 }, 00:21:58.931 "memory_domains": [ 00:21:58.931 { 00:21:58.931 "dma_device_id": "system", 00:21:58.931 "dma_device_type": 1 00:21:58.931 }, 00:21:58.931 { 00:21:58.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:58.931 "dma_device_type": 2 00:21:58.931 } 00:21:58.931 ], 00:21:58.931 "driver_specific": {} 00:21:58.931 } 00:21:58.931 ] 00:21:58.931 13:06:02 -- common/autotest_common.sh@893 -- # return 0 00:21:58.931 13:06:02 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:58.931 13:06:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:58.931 13:06:02 -- bdev/bdev_raid.sh@118 -- # 
local expected_state=configuring 00:21:58.931 13:06:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:58.931 13:06:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:58.931 13:06:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:58.931 13:06:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:58.931 13:06:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:58.931 13:06:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:58.931 13:06:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:58.931 13:06:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:58.931 13:06:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:59.190 13:06:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:59.190 "name": "Existed_Raid", 00:21:59.190 "uuid": "5cea25b0-9a3e-4f27-9289-b4ba3a36bab4", 00:21:59.190 "strip_size_kb": 0, 00:21:59.190 "state": "configuring", 00:21:59.190 "raid_level": "raid1", 00:21:59.190 "superblock": true, 00:21:59.190 "num_base_bdevs": 4, 00:21:59.190 "num_base_bdevs_discovered": 1, 00:21:59.190 "num_base_bdevs_operational": 4, 00:21:59.190 "base_bdevs_list": [ 00:21:59.190 { 00:21:59.190 "name": "BaseBdev1", 00:21:59.190 "uuid": "7c528496-314b-4271-8ec4-8b781357d044", 00:21:59.190 "is_configured": true, 00:21:59.190 "data_offset": 2048, 00:21:59.190 "data_size": 63488 00:21:59.190 }, 00:21:59.190 { 00:21:59.190 "name": "BaseBdev2", 00:21:59.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.190 "is_configured": false, 00:21:59.190 "data_offset": 0, 00:21:59.190 "data_size": 0 00:21:59.190 }, 00:21:59.190 { 00:21:59.190 "name": "BaseBdev3", 00:21:59.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.190 "is_configured": false, 00:21:59.190 "data_offset": 0, 00:21:59.190 "data_size": 0 00:21:59.190 }, 00:21:59.190 { 00:21:59.190 "name": "BaseBdev4", 00:21:59.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.190 "is_configured": false, 00:21:59.190 "data_offset": 0, 00:21:59.190 "data_size": 0 00:21:59.190 } 00:21:59.190 ] 00:21:59.190 }' 00:21:59.190 13:06:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:59.190 13:06:03 -- common/autotest_common.sh@10 -- # set +x 00:21:59.756 13:06:03 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:00.015 [2024-04-17 13:06:04.011163] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:00.015 [2024-04-17 13:06:04.011491] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:22:00.015 13:06:04 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:22:00.015 13:06:04 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:00.276 13:06:04 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:00.535 BaseBdev1 00:22:00.535 13:06:04 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:22:00.535 13:06:04 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:22:00.535 13:06:04 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:22:00.535 13:06:04 -- common/autotest_common.sh@887 -- # local i 00:22:00.535 13:06:04 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 
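Each base bdev is then fed to the array one at a time: the waitforbdev helper seen above pairs bdev_wait_for_examine with a bdev_get_bdevs lookup under a 2000 ms timeout, and every successful add bumps num_base_bdevs_discovered in the verified raid state. A hedged sketch of that per-bdev step, reusing the sizes and names from this run (RPC and SOCK are shorthand for the logged paths):

    # Sketch of one per-base-bdev step, condensed from the log above.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk-raid.sock

    "$RPC" -s "$SOCK" bdev_malloc_create 32 512 -b BaseBdev1   # 32 MB disk, 512 B blocks
    "$RPC" -s "$SOCK" bdev_wait_for_examine                    # let bdev examine callbacks settle
    "$RPC" -s "$SOCK" bdev_get_bdevs -b BaseBdev1 -t 2000      # waitforbdev: poll up to 2000 ms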
00:22:00.535 13:06:04 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:22:00.535 13:06:04 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:00.794 13:06:04 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:01.051 [ 00:22:01.051 { 00:22:01.051 "name": "BaseBdev1", 00:22:01.051 "aliases": [ 00:22:01.051 "4000d1cd-8429-406c-a92c-bc0309a88a7d" 00:22:01.051 ], 00:22:01.051 "product_name": "Malloc disk", 00:22:01.051 "block_size": 512, 00:22:01.051 "num_blocks": 65536, 00:22:01.051 "uuid": "4000d1cd-8429-406c-a92c-bc0309a88a7d", 00:22:01.051 "assigned_rate_limits": { 00:22:01.051 "rw_ios_per_sec": 0, 00:22:01.051 "rw_mbytes_per_sec": 0, 00:22:01.051 "r_mbytes_per_sec": 0, 00:22:01.051 "w_mbytes_per_sec": 0 00:22:01.051 }, 00:22:01.051 "claimed": false, 00:22:01.051 "zoned": false, 00:22:01.051 "supported_io_types": { 00:22:01.051 "read": true, 00:22:01.051 "write": true, 00:22:01.051 "unmap": true, 00:22:01.051 "write_zeroes": true, 00:22:01.051 "flush": true, 00:22:01.051 "reset": true, 00:22:01.051 "compare": false, 00:22:01.051 "compare_and_write": false, 00:22:01.051 "abort": true, 00:22:01.051 "nvme_admin": false, 00:22:01.051 "nvme_io": false 00:22:01.051 }, 00:22:01.051 "memory_domains": [ 00:22:01.051 { 00:22:01.051 "dma_device_id": "system", 00:22:01.051 "dma_device_type": 1 00:22:01.051 }, 00:22:01.051 { 00:22:01.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:01.051 "dma_device_type": 2 00:22:01.051 } 00:22:01.051 ], 00:22:01.051 "driver_specific": {} 00:22:01.051 } 00:22:01.051 ] 00:22:01.051 13:06:05 -- common/autotest_common.sh@893 -- # return 0 00:22:01.051 13:06:05 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:01.309 [2024-04-17 13:06:05.310064] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:01.309 [2024-04-17 13:06:05.312533] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:01.309 [2024-04-17 13:06:05.312734] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:01.309 [2024-04-17 13:06:05.312866] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:01.309 [2024-04-17 13:06:05.312931] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:01.309 [2024-04-17 13:06:05.313031] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:01.309 [2024-04-17 13:06:05.313087] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:01.309 13:06:05 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:22:01.309 13:06:05 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:01.309 13:06:05 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:01.309 13:06:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:01.309 13:06:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:01.309 13:06:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:01.309 13:06:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:01.309 13:06:05 -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=4 00:22:01.309 13:06:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:01.309 13:06:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:01.309 13:06:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:01.309 13:06:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:01.309 13:06:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:01.309 13:06:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:01.567 13:06:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:01.567 "name": "Existed_Raid", 00:22:01.567 "uuid": "d2281bec-2f21-4467-8abf-749a65720539", 00:22:01.567 "strip_size_kb": 0, 00:22:01.567 "state": "configuring", 00:22:01.567 "raid_level": "raid1", 00:22:01.567 "superblock": true, 00:22:01.567 "num_base_bdevs": 4, 00:22:01.567 "num_base_bdevs_discovered": 1, 00:22:01.567 "num_base_bdevs_operational": 4, 00:22:01.567 "base_bdevs_list": [ 00:22:01.567 { 00:22:01.567 "name": "BaseBdev1", 00:22:01.567 "uuid": "4000d1cd-8429-406c-a92c-bc0309a88a7d", 00:22:01.567 "is_configured": true, 00:22:01.567 "data_offset": 2048, 00:22:01.567 "data_size": 63488 00:22:01.567 }, 00:22:01.567 { 00:22:01.567 "name": "BaseBdev2", 00:22:01.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.567 "is_configured": false, 00:22:01.567 "data_offset": 0, 00:22:01.567 "data_size": 0 00:22:01.567 }, 00:22:01.567 { 00:22:01.567 "name": "BaseBdev3", 00:22:01.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.568 "is_configured": false, 00:22:01.568 "data_offset": 0, 00:22:01.568 "data_size": 0 00:22:01.568 }, 00:22:01.568 { 00:22:01.568 "name": "BaseBdev4", 00:22:01.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.568 "is_configured": false, 00:22:01.568 "data_offset": 0, 00:22:01.568 "data_size": 0 00:22:01.568 } 00:22:01.568 ] 00:22:01.568 }' 00:22:01.568 13:06:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:01.568 13:06:05 -- common/autotest_common.sh@10 -- # set +x 00:22:02.502 13:06:06 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:02.502 [2024-04-17 13:06:06.607133] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:02.502 BaseBdev2 00:22:02.502 13:06:06 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:22:02.502 13:06:06 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:22:02.502 13:06:06 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:22:02.502 13:06:06 -- common/autotest_common.sh@887 -- # local i 00:22:02.502 13:06:06 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:22:02.502 13:06:06 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:22:02.502 13:06:06 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:02.761 13:06:06 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:03.064 [ 00:22:03.064 { 00:22:03.064 "name": "BaseBdev2", 00:22:03.064 "aliases": [ 00:22:03.064 "47a79fe0-74ab-4873-8543-5de32460cabc" 00:22:03.064 ], 00:22:03.064 "product_name": "Malloc disk", 00:22:03.064 "block_size": 512, 00:22:03.064 "num_blocks": 65536, 00:22:03.064 "uuid": "47a79fe0-74ab-4873-8543-5de32460cabc", 00:22:03.064 "assigned_rate_limits": { 
00:22:03.064 "rw_ios_per_sec": 0, 00:22:03.064 "rw_mbytes_per_sec": 0, 00:22:03.064 "r_mbytes_per_sec": 0, 00:22:03.064 "w_mbytes_per_sec": 0 00:22:03.064 }, 00:22:03.064 "claimed": true, 00:22:03.064 "claim_type": "exclusive_write", 00:22:03.064 "zoned": false, 00:22:03.064 "supported_io_types": { 00:22:03.064 "read": true, 00:22:03.064 "write": true, 00:22:03.064 "unmap": true, 00:22:03.064 "write_zeroes": true, 00:22:03.064 "flush": true, 00:22:03.064 "reset": true, 00:22:03.065 "compare": false, 00:22:03.065 "compare_and_write": false, 00:22:03.065 "abort": true, 00:22:03.065 "nvme_admin": false, 00:22:03.065 "nvme_io": false 00:22:03.065 }, 00:22:03.065 "memory_domains": [ 00:22:03.065 { 00:22:03.065 "dma_device_id": "system", 00:22:03.065 "dma_device_type": 1 00:22:03.065 }, 00:22:03.065 { 00:22:03.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:03.065 "dma_device_type": 2 00:22:03.065 } 00:22:03.065 ], 00:22:03.065 "driver_specific": {} 00:22:03.065 } 00:22:03.065 ] 00:22:03.065 13:06:07 -- common/autotest_common.sh@893 -- # return 0 00:22:03.065 13:06:07 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:22:03.065 13:06:07 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:03.065 13:06:07 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:03.065 13:06:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:03.065 13:06:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:03.065 13:06:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:03.065 13:06:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:03.065 13:06:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:03.065 13:06:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:03.065 13:06:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:03.065 13:06:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:03.065 13:06:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:03.065 13:06:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.065 13:06:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:03.324 13:06:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:03.324 "name": "Existed_Raid", 00:22:03.324 "uuid": "d2281bec-2f21-4467-8abf-749a65720539", 00:22:03.324 "strip_size_kb": 0, 00:22:03.324 "state": "configuring", 00:22:03.324 "raid_level": "raid1", 00:22:03.324 "superblock": true, 00:22:03.324 "num_base_bdevs": 4, 00:22:03.324 "num_base_bdevs_discovered": 2, 00:22:03.324 "num_base_bdevs_operational": 4, 00:22:03.324 "base_bdevs_list": [ 00:22:03.324 { 00:22:03.324 "name": "BaseBdev1", 00:22:03.324 "uuid": "4000d1cd-8429-406c-a92c-bc0309a88a7d", 00:22:03.324 "is_configured": true, 00:22:03.324 "data_offset": 2048, 00:22:03.324 "data_size": 63488 00:22:03.324 }, 00:22:03.324 { 00:22:03.324 "name": "BaseBdev2", 00:22:03.324 "uuid": "47a79fe0-74ab-4873-8543-5de32460cabc", 00:22:03.324 "is_configured": true, 00:22:03.324 "data_offset": 2048, 00:22:03.324 "data_size": 63488 00:22:03.324 }, 00:22:03.324 { 00:22:03.324 "name": "BaseBdev3", 00:22:03.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.324 "is_configured": false, 00:22:03.324 "data_offset": 0, 00:22:03.324 "data_size": 0 00:22:03.324 }, 00:22:03.324 { 00:22:03.324 "name": "BaseBdev4", 00:22:03.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.324 "is_configured": false, 00:22:03.324 
"data_offset": 0, 00:22:03.324 "data_size": 0 00:22:03.324 } 00:22:03.324 ] 00:22:03.324 }' 00:22:03.324 13:06:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:03.324 13:06:07 -- common/autotest_common.sh@10 -- # set +x 00:22:03.892 13:06:08 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:04.150 [2024-04-17 13:06:08.295434] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:04.409 BaseBdev3 00:22:04.409 13:06:08 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:22:04.409 13:06:08 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:22:04.409 13:06:08 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:22:04.409 13:06:08 -- common/autotest_common.sh@887 -- # local i 00:22:04.409 13:06:08 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:22:04.409 13:06:08 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:22:04.409 13:06:08 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:04.668 13:06:08 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:04.668 [ 00:22:04.668 { 00:22:04.668 "name": "BaseBdev3", 00:22:04.668 "aliases": [ 00:22:04.668 "4172d674-b143-48f7-9d9e-086f0b7a5fd3" 00:22:04.668 ], 00:22:04.668 "product_name": "Malloc disk", 00:22:04.668 "block_size": 512, 00:22:04.668 "num_blocks": 65536, 00:22:04.668 "uuid": "4172d674-b143-48f7-9d9e-086f0b7a5fd3", 00:22:04.668 "assigned_rate_limits": { 00:22:04.668 "rw_ios_per_sec": 0, 00:22:04.668 "rw_mbytes_per_sec": 0, 00:22:04.668 "r_mbytes_per_sec": 0, 00:22:04.668 "w_mbytes_per_sec": 0 00:22:04.668 }, 00:22:04.668 "claimed": true, 00:22:04.668 "claim_type": "exclusive_write", 00:22:04.668 "zoned": false, 00:22:04.668 "supported_io_types": { 00:22:04.668 "read": true, 00:22:04.668 "write": true, 00:22:04.668 "unmap": true, 00:22:04.668 "write_zeroes": true, 00:22:04.668 "flush": true, 00:22:04.668 "reset": true, 00:22:04.668 "compare": false, 00:22:04.668 "compare_and_write": false, 00:22:04.668 "abort": true, 00:22:04.668 "nvme_admin": false, 00:22:04.668 "nvme_io": false 00:22:04.668 }, 00:22:04.668 "memory_domains": [ 00:22:04.668 { 00:22:04.668 "dma_device_id": "system", 00:22:04.668 "dma_device_type": 1 00:22:04.668 }, 00:22:04.668 { 00:22:04.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:04.668 "dma_device_type": 2 00:22:04.668 } 00:22:04.668 ], 00:22:04.668 "driver_specific": {} 00:22:04.668 } 00:22:04.668 ] 00:22:04.668 13:06:08 -- common/autotest_common.sh@893 -- # return 0 00:22:04.668 13:06:08 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:22:04.669 13:06:08 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:04.669 13:06:08 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:04.669 13:06:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:04.669 13:06:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:04.669 13:06:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:04.669 13:06:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:04.669 13:06:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:04.669 13:06:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:04.669 13:06:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:04.669 
13:06:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:04.669 13:06:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:04.669 13:06:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:04.669 13:06:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:04.928 13:06:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:04.928 "name": "Existed_Raid", 00:22:04.928 "uuid": "d2281bec-2f21-4467-8abf-749a65720539", 00:22:04.928 "strip_size_kb": 0, 00:22:04.928 "state": "configuring", 00:22:04.928 "raid_level": "raid1", 00:22:04.928 "superblock": true, 00:22:04.928 "num_base_bdevs": 4, 00:22:04.928 "num_base_bdevs_discovered": 3, 00:22:04.928 "num_base_bdevs_operational": 4, 00:22:04.928 "base_bdevs_list": [ 00:22:04.928 { 00:22:04.928 "name": "BaseBdev1", 00:22:04.928 "uuid": "4000d1cd-8429-406c-a92c-bc0309a88a7d", 00:22:04.928 "is_configured": true, 00:22:04.928 "data_offset": 2048, 00:22:04.928 "data_size": 63488 00:22:04.928 }, 00:22:04.928 { 00:22:04.928 "name": "BaseBdev2", 00:22:04.928 "uuid": "47a79fe0-74ab-4873-8543-5de32460cabc", 00:22:04.928 "is_configured": true, 00:22:04.928 "data_offset": 2048, 00:22:04.928 "data_size": 63488 00:22:04.928 }, 00:22:04.928 { 00:22:04.928 "name": "BaseBdev3", 00:22:04.928 "uuid": "4172d674-b143-48f7-9d9e-086f0b7a5fd3", 00:22:04.928 "is_configured": true, 00:22:04.928 "data_offset": 2048, 00:22:04.928 "data_size": 63488 00:22:04.928 }, 00:22:04.928 { 00:22:04.928 "name": "BaseBdev4", 00:22:04.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:04.928 "is_configured": false, 00:22:04.928 "data_offset": 0, 00:22:04.928 "data_size": 0 00:22:04.928 } 00:22:04.928 ] 00:22:04.928 }' 00:22:04.928 13:06:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:04.928 13:06:09 -- common/autotest_common.sh@10 -- # set +x 00:22:05.863 13:06:09 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:05.863 [2024-04-17 13:06:09.998859] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:05.863 [2024-04-17 13:06:09.999406] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:22:05.863 [2024-04-17 13:06:09.999550] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:05.863 BaseBdev4 00:22:05.863 [2024-04-17 13:06:09.999749] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:22:05.863 [2024-04-17 13:06:10.000168] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:22:05.863 [2024-04-17 13:06:10.000329] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:22:05.863 [2024-04-17 13:06:10.000601] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:06.122 13:06:10 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:22:06.122 13:06:10 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:22:06.122 13:06:10 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:22:06.122 13:06:10 -- common/autotest_common.sh@887 -- # local i 00:22:06.122 13:06:10 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:22:06.122 13:06:10 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:22:06.122 13:06:10 -- common/autotest_common.sh@890 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:06.380 13:06:10 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:06.380 [ 00:22:06.380 { 00:22:06.380 "name": "BaseBdev4", 00:22:06.380 "aliases": [ 00:22:06.380 "ab5f8b8d-9cb5-4f84-abf8-f3ef8fe4b828" 00:22:06.380 ], 00:22:06.380 "product_name": "Malloc disk", 00:22:06.380 "block_size": 512, 00:22:06.380 "num_blocks": 65536, 00:22:06.380 "uuid": "ab5f8b8d-9cb5-4f84-abf8-f3ef8fe4b828", 00:22:06.380 "assigned_rate_limits": { 00:22:06.380 "rw_ios_per_sec": 0, 00:22:06.380 "rw_mbytes_per_sec": 0, 00:22:06.380 "r_mbytes_per_sec": 0, 00:22:06.380 "w_mbytes_per_sec": 0 00:22:06.380 }, 00:22:06.380 "claimed": true, 00:22:06.380 "claim_type": "exclusive_write", 00:22:06.380 "zoned": false, 00:22:06.380 "supported_io_types": { 00:22:06.380 "read": true, 00:22:06.380 "write": true, 00:22:06.380 "unmap": true, 00:22:06.380 "write_zeroes": true, 00:22:06.380 "flush": true, 00:22:06.380 "reset": true, 00:22:06.380 "compare": false, 00:22:06.380 "compare_and_write": false, 00:22:06.380 "abort": true, 00:22:06.380 "nvme_admin": false, 00:22:06.380 "nvme_io": false 00:22:06.380 }, 00:22:06.380 "memory_domains": [ 00:22:06.380 { 00:22:06.380 "dma_device_id": "system", 00:22:06.380 "dma_device_type": 1 00:22:06.380 }, 00:22:06.380 { 00:22:06.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:06.380 "dma_device_type": 2 00:22:06.380 } 00:22:06.380 ], 00:22:06.380 "driver_specific": {} 00:22:06.380 } 00:22:06.380 ] 00:22:06.380 13:06:10 -- common/autotest_common.sh@893 -- # return 0 00:22:06.380 13:06:10 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:22:06.380 13:06:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:06.380 13:06:10 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:22:06.380 13:06:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:06.380 13:06:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:06.380 13:06:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:06.380 13:06:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:06.380 13:06:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:06.380 13:06:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:06.380 13:06:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:06.381 13:06:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:06.381 13:06:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:06.381 13:06:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.381 13:06:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:06.639 13:06:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:06.639 "name": "Existed_Raid", 00:22:06.639 "uuid": "d2281bec-2f21-4467-8abf-749a65720539", 00:22:06.639 "strip_size_kb": 0, 00:22:06.639 "state": "online", 00:22:06.639 "raid_level": "raid1", 00:22:06.639 "superblock": true, 00:22:06.639 "num_base_bdevs": 4, 00:22:06.639 "num_base_bdevs_discovered": 4, 00:22:06.639 "num_base_bdevs_operational": 4, 00:22:06.639 "base_bdevs_list": [ 00:22:06.639 { 00:22:06.639 "name": "BaseBdev1", 00:22:06.639 "uuid": "4000d1cd-8429-406c-a92c-bc0309a88a7d", 00:22:06.639 "is_configured": true, 00:22:06.639 "data_offset": 2048, 00:22:06.639 "data_size": 63488 00:22:06.639 
}, 00:22:06.639 { 00:22:06.639 "name": "BaseBdev2", 00:22:06.639 "uuid": "47a79fe0-74ab-4873-8543-5de32460cabc", 00:22:06.639 "is_configured": true, 00:22:06.639 "data_offset": 2048, 00:22:06.639 "data_size": 63488 00:22:06.639 }, 00:22:06.639 { 00:22:06.639 "name": "BaseBdev3", 00:22:06.639 "uuid": "4172d674-b143-48f7-9d9e-086f0b7a5fd3", 00:22:06.639 "is_configured": true, 00:22:06.639 "data_offset": 2048, 00:22:06.639 "data_size": 63488 00:22:06.639 }, 00:22:06.639 { 00:22:06.639 "name": "BaseBdev4", 00:22:06.639 "uuid": "ab5f8b8d-9cb5-4f84-abf8-f3ef8fe4b828", 00:22:06.639 "is_configured": true, 00:22:06.639 "data_offset": 2048, 00:22:06.639 "data_size": 63488 00:22:06.639 } 00:22:06.639 ] 00:22:06.639 }' 00:22:06.639 13:06:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:06.639 13:06:10 -- common/autotest_common.sh@10 -- # set +x 00:22:07.575 13:06:11 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:07.575 [2024-04-17 13:06:11.667330] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:07.833 13:06:11 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:22:07.833 13:06:11 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:22:07.833 13:06:11 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:22:07.833 13:06:11 -- bdev/bdev_raid.sh@196 -- # return 0 00:22:07.833 13:06:11 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:22:07.833 13:06:11 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:22:07.833 13:06:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:07.833 13:06:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:07.833 13:06:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:07.833 13:06:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:07.833 13:06:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:07.833 13:06:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:07.833 13:06:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:07.833 13:06:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:07.833 13:06:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:07.833 13:06:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:07.833 13:06:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:08.092 13:06:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:08.092 "name": "Existed_Raid", 00:22:08.092 "uuid": "d2281bec-2f21-4467-8abf-749a65720539", 00:22:08.092 "strip_size_kb": 0, 00:22:08.092 "state": "online", 00:22:08.092 "raid_level": "raid1", 00:22:08.092 "superblock": true, 00:22:08.092 "num_base_bdevs": 4, 00:22:08.092 "num_base_bdevs_discovered": 3, 00:22:08.092 "num_base_bdevs_operational": 3, 00:22:08.092 "base_bdevs_list": [ 00:22:08.092 { 00:22:08.092 "name": null, 00:22:08.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.092 "is_configured": false, 00:22:08.092 "data_offset": 2048, 00:22:08.092 "data_size": 63488 00:22:08.092 }, 00:22:08.092 { 00:22:08.092 "name": "BaseBdev2", 00:22:08.092 "uuid": "47a79fe0-74ab-4873-8543-5de32460cabc", 00:22:08.092 "is_configured": true, 00:22:08.092 "data_offset": 2048, 00:22:08.092 "data_size": 63488 00:22:08.092 }, 00:22:08.092 { 00:22:08.092 "name": "BaseBdev3", 00:22:08.092 "uuid": "4172d674-b143-48f7-9d9e-086f0b7a5fd3", 00:22:08.092 "is_configured": true, 
00:22:08.092 "data_offset": 2048, 00:22:08.092 "data_size": 63488 00:22:08.092 }, 00:22:08.092 { 00:22:08.092 "name": "BaseBdev4", 00:22:08.092 "uuid": "ab5f8b8d-9cb5-4f84-abf8-f3ef8fe4b828", 00:22:08.092 "is_configured": true, 00:22:08.092 "data_offset": 2048, 00:22:08.092 "data_size": 63488 00:22:08.092 } 00:22:08.092 ] 00:22:08.092 }' 00:22:08.092 13:06:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:08.092 13:06:12 -- common/autotest_common.sh@10 -- # set +x 00:22:08.658 13:06:12 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:22:08.658 13:06:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:08.658 13:06:12 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.658 13:06:12 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:22:08.918 13:06:12 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:22:08.918 13:06:12 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:08.918 13:06:12 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:09.177 [2024-04-17 13:06:13.249154] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:09.436 13:06:13 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:22:09.436 13:06:13 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:09.436 13:06:13 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.436 13:06:13 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:22:09.436 13:06:13 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:22:09.436 13:06:13 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:09.436 13:06:13 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:09.695 [2024-04-17 13:06:13.817838] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:09.953 13:06:13 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:22:09.953 13:06:13 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:09.953 13:06:13 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.953 13:06:13 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:22:10.213 13:06:14 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:22:10.213 13:06:14 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:10.213 13:06:14 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:22:10.504 [2024-04-17 13:06:14.400298] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:22:10.504 [2024-04-17 13:06:14.400657] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:10.504 [2024-04-17 13:06:14.488420] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:10.504 [2024-04-17 13:06:14.488828] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:10.504 [2024-04-17 13:06:14.488988] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:22:10.504 13:06:14 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:22:10.504 13:06:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:10.504 13:06:14 -- 
bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.504 13:06:14 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:22:10.764 13:06:14 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:22:10.764 13:06:14 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:22:10.764 13:06:14 -- bdev/bdev_raid.sh@287 -- # killprocess 129313 00:22:10.764 13:06:14 -- common/autotest_common.sh@924 -- # '[' -z 129313 ']' 00:22:10.764 13:06:14 -- common/autotest_common.sh@928 -- # kill -0 129313 00:22:10.764 13:06:14 -- common/autotest_common.sh@929 -- # uname 00:22:10.764 13:06:14 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:22:10.764 13:06:14 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 129313 00:22:10.764 killing process with pid 129313 00:22:10.764 13:06:14 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:22:10.764 13:06:14 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:22:10.764 13:06:14 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 129313' 00:22:10.764 13:06:14 -- common/autotest_common.sh@943 -- # kill 129313 00:22:10.764 13:06:14 -- common/autotest_common.sh@948 -- # wait 129313 00:22:10.764 [2024-04-17 13:06:14.791699] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:10.764 [2024-04-17 13:06:14.791832] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:12.142 ************************************ 00:22:12.142 END TEST raid_state_function_test_sb 00:22:12.142 ************************************ 00:22:12.142 13:06:15 -- bdev/bdev_raid.sh@289 -- # return 0 00:22:12.142 00:22:12.143 real 0m16.373s 00:22:12.143 user 0m29.443s 00:22:12.143 sys 0m1.733s 00:22:12.143 13:06:15 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:22:12.143 13:06:15 -- common/autotest_common.sh@10 -- # set +x 00:22:12.143 13:06:15 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:22:12.143 13:06:15 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:22:12.143 13:06:15 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:22:12.143 13:06:15 -- common/autotest_common.sh@10 -- # set +x 00:22:12.143 ************************************ 00:22:12.143 START TEST raid_superblock_test 00:22:12.143 ************************************ 00:22:12.143 13:06:16 -- common/autotest_common.sh@1099 -- # raid_superblock_test raid1 4 00:22:12.143 13:06:16 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:22:12.143 13:06:16 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:22:12.143 13:06:16 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:22:12.143 13:06:16 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:22:12.143 13:06:16 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:22:12.143 13:06:16 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:22:12.143 13:06:16 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:22:12.143 13:06:16 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:22:12.143 13:06:16 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:22:12.143 13:06:16 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:22:12.143 13:06:16 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:22:12.143 13:06:16 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:22:12.143 13:06:16 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:22:12.143 13:06:16 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:22:12.143 13:06:16 -- 
bdev/bdev_raid.sh@353 -- # strip_size=0 00:22:12.143 13:06:16 -- bdev/bdev_raid.sh@357 -- # raid_pid=129816 00:22:12.143 13:06:16 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:22:12.143 13:06:16 -- bdev/bdev_raid.sh@358 -- # waitforlisten 129816 /var/tmp/spdk-raid.sock 00:22:12.143 13:06:16 -- common/autotest_common.sh@817 -- # '[' -z 129816 ']' 00:22:12.143 13:06:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:12.143 13:06:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:12.143 13:06:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:12.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:12.143 13:06:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:12.143 13:06:16 -- common/autotest_common.sh@10 -- # set +x 00:22:12.143 [2024-04-17 13:06:16.090027] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:22:12.143 [2024-04-17 13:06:16.090394] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129816 ] 00:22:12.143 [2024-04-17 13:06:16.256455] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.400 [2024-04-17 13:06:16.443405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:12.658 [2024-04-17 13:06:16.627692] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:13.226 13:06:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:13.226 13:06:17 -- common/autotest_common.sh@850 -- # return 0 00:22:13.226 13:06:17 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:22:13.226 13:06:17 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:22:13.226 13:06:17 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:22:13.226 13:06:17 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:22:13.226 13:06:17 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:13.226 13:06:17 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:13.226 13:06:17 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:22:13.226 13:06:17 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:13.226 13:06:17 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:22:13.226 malloc1 00:22:13.226 13:06:17 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:13.485 [2024-04-17 13:06:17.575438] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:13.485 [2024-04-17 13:06:17.575820] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:13.485 [2024-04-17 13:06:17.575979] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:22:13.485 [2024-04-17 13:06:17.576160] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:13.485 [2024-04-17 13:06:17.578798] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:13.485 [2024-04-17 
13:06:17.578982] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:13.485 pt1 00:22:13.485 13:06:17 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:22:13.485 13:06:17 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:22:13.485 13:06:17 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:22:13.485 13:06:17 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:22:13.485 13:06:17 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:13.485 13:06:17 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:13.485 13:06:17 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:22:13.485 13:06:17 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:13.485 13:06:17 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:22:13.744 malloc2 00:22:13.744 13:06:17 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:14.002 [2024-04-17 13:06:18.069024] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:14.002 [2024-04-17 13:06:18.069424] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:14.002 [2024-04-17 13:06:18.069514] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:22:14.002 [2024-04-17 13:06:18.069744] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:14.002 [2024-04-17 13:06:18.072555] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:14.002 [2024-04-17 13:06:18.072759] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:14.002 pt2 00:22:14.002 13:06:18 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:22:14.002 13:06:18 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:22:14.002 13:06:18 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:22:14.002 13:06:18 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:22:14.002 13:06:18 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:22:14.002 13:06:18 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:14.002 13:06:18 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:22:14.002 13:06:18 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:14.002 13:06:18 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:22:14.261 malloc3 00:22:14.261 13:06:18 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:14.581 [2024-04-17 13:06:18.580350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:14.581 [2024-04-17 13:06:18.580632] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:14.581 [2024-04-17 13:06:18.580845] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:22:14.581 [2024-04-17 13:06:18.581045] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:14.581 [2024-04-17 13:06:18.583673] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:14.581 [2024-04-17 
13:06:18.583869] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:14.581 pt3 00:22:14.581 13:06:18 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:22:14.581 13:06:18 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:22:14.581 13:06:18 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:22:14.581 13:06:18 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:22:14.581 13:06:18 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:22:14.581 13:06:18 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:14.581 13:06:18 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:22:14.581 13:06:18 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:14.581 13:06:18 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:22:14.839 malloc4 00:22:14.839 13:06:18 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:15.097 [2024-04-17 13:06:19.099837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:15.097 [2024-04-17 13:06:19.100239] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:15.097 [2024-04-17 13:06:19.100467] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:15.097 [2024-04-17 13:06:19.100663] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:15.097 [2024-04-17 13:06:19.103291] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:15.097 [2024-04-17 13:06:19.103494] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:15.097 pt4 00:22:15.097 13:06:19 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:22:15.097 13:06:19 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:22:15.097 13:06:19 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:22:15.357 [2024-04-17 13:06:19.360007] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:15.357 [2024-04-17 13:06:19.362432] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:15.357 [2024-04-17 13:06:19.362685] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:15.357 [2024-04-17 13:06:19.362940] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:15.357 [2024-04-17 13:06:19.363368] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:22:15.357 [2024-04-17 13:06:19.363509] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:15.357 [2024-04-17 13:06:19.363744] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:22:15.357 [2024-04-17 13:06:19.364377] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:22:15.357 [2024-04-17 13:06:19.364550] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:22:15.357 [2024-04-17 13:06:19.364954] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:15.357 13:06:19 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:22:15.357 13:06:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:15.357 13:06:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:15.357 13:06:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:15.357 13:06:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:15.357 13:06:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:15.357 13:06:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:15.357 13:06:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:15.357 13:06:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:15.357 13:06:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:15.357 13:06:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:15.357 13:06:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:15.615 13:06:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:15.615 "name": "raid_bdev1", 00:22:15.615 "uuid": "dbf9754a-f43c-49ad-b536-7a8d36387041", 00:22:15.615 "strip_size_kb": 0, 00:22:15.615 "state": "online", 00:22:15.615 "raid_level": "raid1", 00:22:15.615 "superblock": true, 00:22:15.615 "num_base_bdevs": 4, 00:22:15.615 "num_base_bdevs_discovered": 4, 00:22:15.615 "num_base_bdevs_operational": 4, 00:22:15.615 "base_bdevs_list": [ 00:22:15.615 { 00:22:15.615 "name": "pt1", 00:22:15.615 "uuid": "d8a524be-45ff-599a-bd8e-669992aaa039", 00:22:15.615 "is_configured": true, 00:22:15.616 "data_offset": 2048, 00:22:15.616 "data_size": 63488 00:22:15.616 }, 00:22:15.616 { 00:22:15.616 "name": "pt2", 00:22:15.616 "uuid": "9742f79d-1556-5e27-8b0b-1a1ca51579a1", 00:22:15.616 "is_configured": true, 00:22:15.616 "data_offset": 2048, 00:22:15.616 "data_size": 63488 00:22:15.616 }, 00:22:15.616 { 00:22:15.616 "name": "pt3", 00:22:15.616 "uuid": "d865c3b0-8107-5b07-9f35-1047914bd194", 00:22:15.616 "is_configured": true, 00:22:15.616 "data_offset": 2048, 00:22:15.616 "data_size": 63488 00:22:15.616 }, 00:22:15.616 { 00:22:15.616 "name": "pt4", 00:22:15.616 "uuid": "d0a28536-8026-5d02-862d-9fcdde57cf52", 00:22:15.616 "is_configured": true, 00:22:15.616 "data_offset": 2048, 00:22:15.616 "data_size": 63488 00:22:15.616 } 00:22:15.616 ] 00:22:15.616 }' 00:22:15.616 13:06:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:15.616 13:06:19 -- common/autotest_common.sh@10 -- # set +x 00:22:16.183 13:06:20 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:22:16.183 13:06:20 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:16.441 [2024-04-17 13:06:20.505571] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:16.441 13:06:20 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=dbf9754a-f43c-49ad-b536-7a8d36387041 00:22:16.441 13:06:20 -- bdev/bdev_raid.sh@380 -- # '[' -z dbf9754a-f43c-49ad-b536-7a8d36387041 ']' 00:22:16.442 13:06:20 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:16.699 [2024-04-17 13:06:20.769334] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:16.700 [2024-04-17 13:06:20.769594] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:16.700 [2024-04-17 13:06:20.769826] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:16.700 [2024-04-17 
13:06:20.770070] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:16.700 [2024-04-17 13:06:20.770187] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:22:16.700 13:06:20 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:16.700 13:06:20 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:22:16.958 13:06:21 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:22:16.958 13:06:21 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:22:16.958 13:06:21 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:22:16.958 13:06:21 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:17.215 13:06:21 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:22:17.215 13:06:21 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:17.473 13:06:21 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:22:17.473 13:06:21 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:17.732 13:06:21 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:22:17.732 13:06:21 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:22:17.990 13:06:22 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:22:17.990 13:06:22 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:18.249 13:06:22 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:22:18.249 13:06:22 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:18.249 13:06:22 -- common/autotest_common.sh@638 -- # local es=0 00:22:18.249 13:06:22 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:18.249 13:06:22 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:18.249 13:06:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:18.249 13:06:22 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:18.249 13:06:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:18.249 13:06:22 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:18.249 13:06:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:18.249 13:06:22 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:18.249 13:06:22 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:22:18.249 13:06:22 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:18.508 [2024-04-17 13:06:22.476453] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:18.508 [2024-04-17 
13:06:22.478463] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:18.508 [2024-04-17 13:06:22.478547] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:22:18.508 [2024-04-17 13:06:22.478591] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:22:18.508 [2024-04-17 13:06:22.478642] bdev_raid.c:2995:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:22:18.508 [2024-04-17 13:06:22.478775] bdev_raid.c:2995:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:22:18.508 [2024-04-17 13:06:22.478815] bdev_raid.c:2995:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:22:18.508 [2024-04-17 13:06:22.478886] bdev_raid.c:2995:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:22:18.508 [2024-04-17 13:06:22.478942] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:18.508 [2024-04-17 13:06:22.478954] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring 00:22:18.508 request: 00:22:18.508 { 00:22:18.508 "name": "raid_bdev1", 00:22:18.508 "raid_level": "raid1", 00:22:18.508 "base_bdevs": [ 00:22:18.508 "malloc1", 00:22:18.508 "malloc2", 00:22:18.508 "malloc3", 00:22:18.508 "malloc4" 00:22:18.508 ], 00:22:18.508 "superblock": false, 00:22:18.508 "method": "bdev_raid_create", 00:22:18.508 "req_id": 1 00:22:18.508 } 00:22:18.508 Got JSON-RPC error response 00:22:18.508 response: 00:22:18.508 { 00:22:18.508 "code": -17, 00:22:18.508 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:18.508 } 00:22:18.508 13:06:22 -- common/autotest_common.sh@641 -- # es=1 00:22:18.508 13:06:22 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:18.508 13:06:22 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:18.508 13:06:22 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:18.508 13:06:22 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:22:18.508 13:06:22 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.767 13:06:22 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:22:18.767 13:06:22 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:22:18.767 13:06:22 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:19.026 [2024-04-17 13:06:22.960511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:19.026 [2024-04-17 13:06:22.960683] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:19.026 [2024-04-17 13:06:22.960720] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:19.026 [2024-04-17 13:06:22.960750] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:19.026 [2024-04-17 13:06:22.963151] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:19.026 [2024-04-17 13:06:22.963239] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:19.026 [2024-04-17 13:06:22.963420] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:22:19.026 [2024-04-17 13:06:22.963529] 
bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:19.026 pt1 00:22:19.026 13:06:22 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:22:19.026 13:06:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:19.026 13:06:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:19.026 13:06:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:19.026 13:06:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:19.026 13:06:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:19.026 13:06:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:19.026 13:06:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:19.026 13:06:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:19.026 13:06:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:19.026 13:06:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:19.027 13:06:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:19.285 13:06:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:19.285 "name": "raid_bdev1", 00:22:19.285 "uuid": "dbf9754a-f43c-49ad-b536-7a8d36387041", 00:22:19.285 "strip_size_kb": 0, 00:22:19.285 "state": "configuring", 00:22:19.285 "raid_level": "raid1", 00:22:19.285 "superblock": true, 00:22:19.285 "num_base_bdevs": 4, 00:22:19.285 "num_base_bdevs_discovered": 1, 00:22:19.285 "num_base_bdevs_operational": 4, 00:22:19.285 "base_bdevs_list": [ 00:22:19.285 { 00:22:19.285 "name": "pt1", 00:22:19.285 "uuid": "d8a524be-45ff-599a-bd8e-669992aaa039", 00:22:19.285 "is_configured": true, 00:22:19.285 "data_offset": 2048, 00:22:19.285 "data_size": 63488 00:22:19.285 }, 00:22:19.285 { 00:22:19.285 "name": null, 00:22:19.285 "uuid": "9742f79d-1556-5e27-8b0b-1a1ca51579a1", 00:22:19.285 "is_configured": false, 00:22:19.285 "data_offset": 2048, 00:22:19.285 "data_size": 63488 00:22:19.285 }, 00:22:19.285 { 00:22:19.285 "name": null, 00:22:19.285 "uuid": "d865c3b0-8107-5b07-9f35-1047914bd194", 00:22:19.285 "is_configured": false, 00:22:19.285 "data_offset": 2048, 00:22:19.285 "data_size": 63488 00:22:19.285 }, 00:22:19.285 { 00:22:19.285 "name": null, 00:22:19.285 "uuid": "d0a28536-8026-5d02-862d-9fcdde57cf52", 00:22:19.285 "is_configured": false, 00:22:19.285 "data_offset": 2048, 00:22:19.285 "data_size": 63488 00:22:19.285 } 00:22:19.285 ] 00:22:19.285 }' 00:22:19.285 13:06:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:19.285 13:06:23 -- common/autotest_common.sh@10 -- # set +x 00:22:19.852 13:06:23 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:22:19.852 13:06:23 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:20.110 [2024-04-17 13:06:24.148855] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:20.110 [2024-04-17 13:06:24.148986] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:20.110 [2024-04-17 13:06:24.149032] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:22:20.110 [2024-04-17 13:06:24.149055] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:20.110 [2024-04-17 13:06:24.149553] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:20.110 [2024-04-17 
13:06:24.149600] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:20.110 [2024-04-17 13:06:24.149706] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:20.110 [2024-04-17 13:06:24.149743] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:20.110 pt2 00:22:20.110 13:06:24 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:20.371 [2024-04-17 13:06:24.376960] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:22:20.371 13:06:24 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:22:20.371 13:06:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:20.371 13:06:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:20.371 13:06:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:20.371 13:06:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:20.371 13:06:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:20.371 13:06:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:20.371 13:06:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:20.371 13:06:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:20.371 13:06:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:20.371 13:06:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:20.371 13:06:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.630 13:06:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:20.630 "name": "raid_bdev1", 00:22:20.630 "uuid": "dbf9754a-f43c-49ad-b536-7a8d36387041", 00:22:20.630 "strip_size_kb": 0, 00:22:20.630 "state": "configuring", 00:22:20.630 "raid_level": "raid1", 00:22:20.630 "superblock": true, 00:22:20.630 "num_base_bdevs": 4, 00:22:20.630 "num_base_bdevs_discovered": 1, 00:22:20.630 "num_base_bdevs_operational": 4, 00:22:20.630 "base_bdevs_list": [ 00:22:20.630 { 00:22:20.630 "name": "pt1", 00:22:20.630 "uuid": "d8a524be-45ff-599a-bd8e-669992aaa039", 00:22:20.630 "is_configured": true, 00:22:20.630 "data_offset": 2048, 00:22:20.630 "data_size": 63488 00:22:20.630 }, 00:22:20.630 { 00:22:20.630 "name": null, 00:22:20.630 "uuid": "9742f79d-1556-5e27-8b0b-1a1ca51579a1", 00:22:20.630 "is_configured": false, 00:22:20.630 "data_offset": 2048, 00:22:20.630 "data_size": 63488 00:22:20.630 }, 00:22:20.630 { 00:22:20.630 "name": null, 00:22:20.630 "uuid": "d865c3b0-8107-5b07-9f35-1047914bd194", 00:22:20.630 "is_configured": false, 00:22:20.630 "data_offset": 2048, 00:22:20.630 "data_size": 63488 00:22:20.630 }, 00:22:20.630 { 00:22:20.630 "name": null, 00:22:20.630 "uuid": "d0a28536-8026-5d02-862d-9fcdde57cf52", 00:22:20.630 "is_configured": false, 00:22:20.630 "data_offset": 2048, 00:22:20.630 "data_size": 63488 00:22:20.630 } 00:22:20.630 ] 00:22:20.630 }' 00:22:20.630 13:06:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:20.630 13:06:24 -- common/autotest_common.sh@10 -- # set +x 00:22:21.566 13:06:25 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:22:21.566 13:06:25 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:22:21.566 13:06:25 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:21.566 [2024-04-17 
13:06:25.630477] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:21.566 [2024-04-17 13:06:25.630624] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:21.566 [2024-04-17 13:06:25.630668] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:22:21.566 [2024-04-17 13:06:25.630694] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:21.566 [2024-04-17 13:06:25.631235] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:21.566 [2024-04-17 13:06:25.631320] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:21.566 [2024-04-17 13:06:25.631428] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:21.566 [2024-04-17 13:06:25.631458] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:21.566 pt2 00:22:21.566 13:06:25 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:22:21.566 13:06:25 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:22:21.566 13:06:25 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:21.825 [2024-04-17 13:06:25.878523] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:21.825 [2024-04-17 13:06:25.878657] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:21.825 [2024-04-17 13:06:25.878704] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:22:21.825 [2024-04-17 13:06:25.878734] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:21.825 [2024-04-17 13:06:25.879276] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:21.825 [2024-04-17 13:06:25.879365] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:21.825 [2024-04-17 13:06:25.879478] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:22:21.825 [2024-04-17 13:06:25.879507] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:21.825 pt3 00:22:21.825 13:06:25 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:22:21.825 13:06:25 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:22:21.825 13:06:25 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:22.096 [2024-04-17 13:06:26.086606] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:22.096 [2024-04-17 13:06:26.086751] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:22.096 [2024-04-17 13:06:26.086794] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:22:22.096 [2024-04-17 13:06:26.086822] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:22.096 [2024-04-17 13:06:26.087466] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:22.096 [2024-04-17 13:06:26.087543] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:22.096 [2024-04-17 13:06:26.087676] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:22:22.096 [2024-04-17 13:06:26.087723] 
bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:22.096 [2024-04-17 13:06:26.087973] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:22:22.096 [2024-04-17 13:06:26.087998] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:22.096 [2024-04-17 13:06:26.088117] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:22.096 [2024-04-17 13:06:26.088495] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:22:22.097 [2024-04-17 13:06:26.088521] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:22:22.097 [2024-04-17 13:06:26.088670] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:22.097 pt4 00:22:22.097 13:06:26 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:22:22.097 13:06:26 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:22:22.097 13:06:26 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:22.097 13:06:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:22.097 13:06:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:22.097 13:06:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:22.097 13:06:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:22.097 13:06:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:22.097 13:06:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:22.097 13:06:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:22.097 13:06:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:22.097 13:06:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:22.097 13:06:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:22.097 13:06:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:22.364 13:06:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:22.364 "name": "raid_bdev1", 00:22:22.364 "uuid": "dbf9754a-f43c-49ad-b536-7a8d36387041", 00:22:22.364 "strip_size_kb": 0, 00:22:22.364 "state": "online", 00:22:22.364 "raid_level": "raid1", 00:22:22.364 "superblock": true, 00:22:22.364 "num_base_bdevs": 4, 00:22:22.364 "num_base_bdevs_discovered": 4, 00:22:22.364 "num_base_bdevs_operational": 4, 00:22:22.364 "base_bdevs_list": [ 00:22:22.364 { 00:22:22.364 "name": "pt1", 00:22:22.364 "uuid": "d8a524be-45ff-599a-bd8e-669992aaa039", 00:22:22.364 "is_configured": true, 00:22:22.364 "data_offset": 2048, 00:22:22.364 "data_size": 63488 00:22:22.364 }, 00:22:22.364 { 00:22:22.364 "name": "pt2", 00:22:22.364 "uuid": "9742f79d-1556-5e27-8b0b-1a1ca51579a1", 00:22:22.364 "is_configured": true, 00:22:22.364 "data_offset": 2048, 00:22:22.364 "data_size": 63488 00:22:22.364 }, 00:22:22.364 { 00:22:22.364 "name": "pt3", 00:22:22.364 "uuid": "d865c3b0-8107-5b07-9f35-1047914bd194", 00:22:22.364 "is_configured": true, 00:22:22.364 "data_offset": 2048, 00:22:22.364 "data_size": 63488 00:22:22.364 }, 00:22:22.364 { 00:22:22.364 "name": "pt4", 00:22:22.364 "uuid": "d0a28536-8026-5d02-862d-9fcdde57cf52", 00:22:22.364 "is_configured": true, 00:22:22.364 "data_offset": 2048, 00:22:22.364 "data_size": 63488 00:22:22.364 } 00:22:22.364 ] 00:22:22.364 }' 00:22:22.364 13:06:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:22.364 13:06:26 -- common/autotest_common.sh@10 -- # set +x 
00:22:22.931 13:06:27 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:22.931 13:06:27 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:22:23.190 [2024-04-17 13:06:27.299173] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:23.190 13:06:27 -- bdev/bdev_raid.sh@430 -- # '[' dbf9754a-f43c-49ad-b536-7a8d36387041 '!=' dbf9754a-f43c-49ad-b536-7a8d36387041 ']' 00:22:23.190 13:06:27 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:22:23.190 13:06:27 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:22:23.190 13:06:27 -- bdev/bdev_raid.sh@196 -- # return 0 00:22:23.190 13:06:27 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:23.449 [2024-04-17 13:06:27.563038] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:23.449 13:06:27 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:23.449 13:06:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:23.449 13:06:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:23.449 13:06:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:23.449 13:06:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:23.449 13:06:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:23.449 13:06:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:23.449 13:06:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:23.449 13:06:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:23.449 13:06:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:23.449 13:06:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:23.449 13:06:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:23.708 13:06:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:23.708 "name": "raid_bdev1", 00:22:23.708 "uuid": "dbf9754a-f43c-49ad-b536-7a8d36387041", 00:22:23.708 "strip_size_kb": 0, 00:22:23.708 "state": "online", 00:22:23.708 "raid_level": "raid1", 00:22:23.708 "superblock": true, 00:22:23.708 "num_base_bdevs": 4, 00:22:23.708 "num_base_bdevs_discovered": 3, 00:22:23.708 "num_base_bdevs_operational": 3, 00:22:23.708 "base_bdevs_list": [ 00:22:23.708 { 00:22:23.708 "name": null, 00:22:23.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:23.708 "is_configured": false, 00:22:23.708 "data_offset": 2048, 00:22:23.708 "data_size": 63488 00:22:23.708 }, 00:22:23.708 { 00:22:23.708 "name": "pt2", 00:22:23.708 "uuid": "9742f79d-1556-5e27-8b0b-1a1ca51579a1", 00:22:23.708 "is_configured": true, 00:22:23.708 "data_offset": 2048, 00:22:23.708 "data_size": 63488 00:22:23.708 }, 00:22:23.708 { 00:22:23.708 "name": "pt3", 00:22:23.708 "uuid": "d865c3b0-8107-5b07-9f35-1047914bd194", 00:22:23.708 "is_configured": true, 00:22:23.708 "data_offset": 2048, 00:22:23.708 "data_size": 63488 00:22:23.708 }, 00:22:23.708 { 00:22:23.708 "name": "pt4", 00:22:23.708 "uuid": "d0a28536-8026-5d02-862d-9fcdde57cf52", 00:22:23.708 "is_configured": true, 00:22:23.708 "data_offset": 2048, 00:22:23.708 "data_size": 63488 00:22:23.708 } 00:22:23.708 ] 00:22:23.708 }' 00:22:23.708 13:06:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:23.708 13:06:27 -- common/autotest_common.sh@10 -- # set +x 00:22:24.645 13:06:28 -- bdev/bdev_raid.sh@442 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:24.645 [2024-04-17 13:06:28.655223] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:24.645 [2024-04-17 13:06:28.655270] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:24.645 [2024-04-17 13:06:28.655402] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:24.645 [2024-04-17 13:06:28.655499] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:24.645 [2024-04-17 13:06:28.655518] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:22:24.645 13:06:28 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:24.645 13:06:28 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:22:24.905 13:06:28 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:22:24.905 13:06:28 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:22:24.905 13:06:28 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:22:24.905 13:06:28 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:22:24.905 13:06:28 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:25.162 13:06:29 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:22:25.162 13:06:29 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:22:25.162 13:06:29 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:25.421 13:06:29 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:22:25.421 13:06:29 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:22:25.421 13:06:29 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:22:25.681 13:06:29 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:22:25.681 13:06:29 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:22:25.681 13:06:29 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:22:25.681 13:06:29 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:22:25.681 13:06:29 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:25.942 [2024-04-17 13:06:29.852312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:25.942 [2024-04-17 13:06:29.852469] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:25.942 [2024-04-17 13:06:29.852515] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:22:25.942 [2024-04-17 13:06:29.852572] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:25.942 [2024-04-17 13:06:29.855789] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:25.942 [2024-04-17 13:06:29.855922] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:25.942 [2024-04-17 13:06:29.856100] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:25.942 [2024-04-17 13:06:29.856180] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:25.942 pt2 00:22:25.942 13:06:29 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid1 0 3 00:22:25.942 13:06:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:25.942 13:06:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:25.942 13:06:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:25.942 13:06:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:25.942 13:06:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:25.942 13:06:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:25.942 13:06:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:25.942 13:06:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:25.942 13:06:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:25.942 13:06:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:25.942 13:06:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.202 13:06:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:26.202 "name": "raid_bdev1", 00:22:26.202 "uuid": "dbf9754a-f43c-49ad-b536-7a8d36387041", 00:22:26.202 "strip_size_kb": 0, 00:22:26.202 "state": "configuring", 00:22:26.202 "raid_level": "raid1", 00:22:26.202 "superblock": true, 00:22:26.202 "num_base_bdevs": 4, 00:22:26.202 "num_base_bdevs_discovered": 1, 00:22:26.202 "num_base_bdevs_operational": 3, 00:22:26.202 "base_bdevs_list": [ 00:22:26.202 { 00:22:26.202 "name": null, 00:22:26.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.202 "is_configured": false, 00:22:26.202 "data_offset": 2048, 00:22:26.202 "data_size": 63488 00:22:26.202 }, 00:22:26.202 { 00:22:26.202 "name": "pt2", 00:22:26.202 "uuid": "9742f79d-1556-5e27-8b0b-1a1ca51579a1", 00:22:26.202 "is_configured": true, 00:22:26.202 "data_offset": 2048, 00:22:26.202 "data_size": 63488 00:22:26.202 }, 00:22:26.202 { 00:22:26.202 "name": null, 00:22:26.202 "uuid": "d865c3b0-8107-5b07-9f35-1047914bd194", 00:22:26.202 "is_configured": false, 00:22:26.202 "data_offset": 2048, 00:22:26.202 "data_size": 63488 00:22:26.202 }, 00:22:26.202 { 00:22:26.202 "name": null, 00:22:26.202 "uuid": "d0a28536-8026-5d02-862d-9fcdde57cf52", 00:22:26.202 "is_configured": false, 00:22:26.202 "data_offset": 2048, 00:22:26.202 "data_size": 63488 00:22:26.202 } 00:22:26.202 ] 00:22:26.202 }' 00:22:26.202 13:06:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:26.202 13:06:30 -- common/autotest_common.sh@10 -- # set +x 00:22:26.770 13:06:30 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:22:26.770 13:06:30 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:22:26.770 13:06:30 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:27.029 [2024-04-17 13:06:31.000712] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:27.029 [2024-04-17 13:06:31.000852] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:27.029 [2024-04-17 13:06:31.000897] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:22:27.029 [2024-04-17 13:06:31.000928] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:27.029 [2024-04-17 13:06:31.001500] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:27.029 [2024-04-17 13:06:31.001563] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:27.029 
[2024-04-17 13:06:31.001680] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:22:27.029 [2024-04-17 13:06:31.001711] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:27.029 pt3 00:22:27.029 13:06:31 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:22:27.029 13:06:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:27.029 13:06:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:27.029 13:06:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:27.029 13:06:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:27.029 13:06:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:27.029 13:06:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:27.029 13:06:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:27.029 13:06:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:27.029 13:06:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:27.029 13:06:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:27.029 13:06:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:27.289 13:06:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:27.289 "name": "raid_bdev1", 00:22:27.289 "uuid": "dbf9754a-f43c-49ad-b536-7a8d36387041", 00:22:27.289 "strip_size_kb": 0, 00:22:27.289 "state": "configuring", 00:22:27.289 "raid_level": "raid1", 00:22:27.289 "superblock": true, 00:22:27.289 "num_base_bdevs": 4, 00:22:27.289 "num_base_bdevs_discovered": 2, 00:22:27.289 "num_base_bdevs_operational": 3, 00:22:27.289 "base_bdevs_list": [ 00:22:27.289 { 00:22:27.289 "name": null, 00:22:27.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.289 "is_configured": false, 00:22:27.289 "data_offset": 2048, 00:22:27.289 "data_size": 63488 00:22:27.289 }, 00:22:27.289 { 00:22:27.289 "name": "pt2", 00:22:27.289 "uuid": "9742f79d-1556-5e27-8b0b-1a1ca51579a1", 00:22:27.289 "is_configured": true, 00:22:27.289 "data_offset": 2048, 00:22:27.289 "data_size": 63488 00:22:27.289 }, 00:22:27.289 { 00:22:27.289 "name": "pt3", 00:22:27.289 "uuid": "d865c3b0-8107-5b07-9f35-1047914bd194", 00:22:27.289 "is_configured": true, 00:22:27.289 "data_offset": 2048, 00:22:27.289 "data_size": 63488 00:22:27.289 }, 00:22:27.289 { 00:22:27.289 "name": null, 00:22:27.289 "uuid": "d0a28536-8026-5d02-862d-9fcdde57cf52", 00:22:27.289 "is_configured": false, 00:22:27.289 "data_offset": 2048, 00:22:27.289 "data_size": 63488 00:22:27.289 } 00:22:27.289 ] 00:22:27.289 }' 00:22:27.289 13:06:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:27.289 13:06:31 -- common/autotest_common.sh@10 -- # set +x 00:22:27.857 13:06:31 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:22:27.857 13:06:31 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:22:27.857 13:06:31 -- bdev/bdev_raid.sh@462 -- # i=3 00:22:27.857 13:06:31 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:28.116 [2024-04-17 13:06:32.241061] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:28.116 [2024-04-17 13:06:32.241193] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:28.116 [2024-04-17 13:06:32.241237] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000c380 00:22:28.116 [2024-04-17 13:06:32.241259] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:28.116 [2024-04-17 13:06:32.241811] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:28.116 [2024-04-17 13:06:32.241865] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:28.116 [2024-04-17 13:06:32.241982] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:22:28.116 [2024-04-17 13:06:32.242012] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:28.116 [2024-04-17 13:06:32.242168] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80 00:22:28.116 [2024-04-17 13:06:32.242191] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:28.116 [2024-04-17 13:06:32.242337] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:22:28.116 [2024-04-17 13:06:32.242728] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:22:28.116 [2024-04-17 13:06:32.242752] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:22:28.116 [2024-04-17 13:06:32.242927] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:28.116 pt4 00:22:28.116 13:06:32 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:28.116 13:06:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:28.116 13:06:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:28.116 13:06:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:28.116 13:06:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:28.116 13:06:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:28.116 13:06:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:28.116 13:06:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:28.116 13:06:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:28.116 13:06:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:28.116 13:06:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.116 13:06:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:28.375 13:06:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:28.375 "name": "raid_bdev1", 00:22:28.375 "uuid": "dbf9754a-f43c-49ad-b536-7a8d36387041", 00:22:28.375 "strip_size_kb": 0, 00:22:28.375 "state": "online", 00:22:28.375 "raid_level": "raid1", 00:22:28.375 "superblock": true, 00:22:28.375 "num_base_bdevs": 4, 00:22:28.375 "num_base_bdevs_discovered": 3, 00:22:28.375 "num_base_bdevs_operational": 3, 00:22:28.375 "base_bdevs_list": [ 00:22:28.375 { 00:22:28.375 "name": null, 00:22:28.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:28.375 "is_configured": false, 00:22:28.375 "data_offset": 2048, 00:22:28.375 "data_size": 63488 00:22:28.375 }, 00:22:28.375 { 00:22:28.375 "name": "pt2", 00:22:28.375 "uuid": "9742f79d-1556-5e27-8b0b-1a1ca51579a1", 00:22:28.375 "is_configured": true, 00:22:28.375 "data_offset": 2048, 00:22:28.375 "data_size": 63488 00:22:28.375 }, 00:22:28.375 { 00:22:28.375 "name": "pt3", 00:22:28.375 "uuid": "d865c3b0-8107-5b07-9f35-1047914bd194", 00:22:28.375 "is_configured": true, 00:22:28.375 "data_offset": 2048, 
00:22:28.375 "data_size": 63488 00:22:28.375 }, 00:22:28.375 { 00:22:28.375 "name": "pt4", 00:22:28.375 "uuid": "d0a28536-8026-5d02-862d-9fcdde57cf52", 00:22:28.375 "is_configured": true, 00:22:28.375 "data_offset": 2048, 00:22:28.375 "data_size": 63488 00:22:28.375 } 00:22:28.375 ] 00:22:28.375 }' 00:22:28.375 13:06:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:28.375 13:06:32 -- common/autotest_common.sh@10 -- # set +x 00:22:29.333 13:06:33 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:22:29.333 13:06:33 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:29.333 [2024-04-17 13:06:33.337249] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:29.333 [2024-04-17 13:06:33.337292] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:29.333 [2024-04-17 13:06:33.337388] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:29.333 [2024-04-17 13:06:33.337466] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:29.333 [2024-04-17 13:06:33.337477] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:22:29.333 13:06:33 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:29.333 13:06:33 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:22:29.592 13:06:33 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:22:29.592 13:06:33 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:22:29.592 13:06:33 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:29.851 [2024-04-17 13:06:33.861369] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:29.851 [2024-04-17 13:06:33.861497] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:29.851 [2024-04-17 13:06:33.861539] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:22:29.851 [2024-04-17 13:06:33.861562] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:29.851 [2024-04-17 13:06:33.864108] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:29.851 [2024-04-17 13:06:33.864217] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:29.851 [2024-04-17 13:06:33.864344] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:22:29.851 [2024-04-17 13:06:33.864427] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:29.851 pt1 00:22:29.851 13:06:33 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:22:29.851 13:06:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:29.851 13:06:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:29.851 13:06:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:29.851 13:06:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:29.851 13:06:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:29.851 13:06:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:29.851 13:06:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:29.851 13:06:33 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:29.851 13:06:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:29.851 13:06:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:29.851 13:06:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:30.110 13:06:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:30.110 "name": "raid_bdev1", 00:22:30.110 "uuid": "dbf9754a-f43c-49ad-b536-7a8d36387041", 00:22:30.110 "strip_size_kb": 0, 00:22:30.110 "state": "configuring", 00:22:30.110 "raid_level": "raid1", 00:22:30.110 "superblock": true, 00:22:30.110 "num_base_bdevs": 4, 00:22:30.110 "num_base_bdevs_discovered": 1, 00:22:30.110 "num_base_bdevs_operational": 4, 00:22:30.110 "base_bdevs_list": [ 00:22:30.110 { 00:22:30.110 "name": "pt1", 00:22:30.110 "uuid": "d8a524be-45ff-599a-bd8e-669992aaa039", 00:22:30.110 "is_configured": true, 00:22:30.110 "data_offset": 2048, 00:22:30.110 "data_size": 63488 00:22:30.110 }, 00:22:30.110 { 00:22:30.110 "name": null, 00:22:30.110 "uuid": "9742f79d-1556-5e27-8b0b-1a1ca51579a1", 00:22:30.110 "is_configured": false, 00:22:30.110 "data_offset": 2048, 00:22:30.110 "data_size": 63488 00:22:30.110 }, 00:22:30.110 { 00:22:30.110 "name": null, 00:22:30.110 "uuid": "d865c3b0-8107-5b07-9f35-1047914bd194", 00:22:30.110 "is_configured": false, 00:22:30.110 "data_offset": 2048, 00:22:30.110 "data_size": 63488 00:22:30.110 }, 00:22:30.110 { 00:22:30.110 "name": null, 00:22:30.110 "uuid": "d0a28536-8026-5d02-862d-9fcdde57cf52", 00:22:30.110 "is_configured": false, 00:22:30.110 "data_offset": 2048, 00:22:30.110 "data_size": 63488 00:22:30.110 } 00:22:30.110 ] 00:22:30.110 }' 00:22:30.110 13:06:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:30.110 13:06:34 -- common/autotest_common.sh@10 -- # set +x 00:22:30.677 13:06:34 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:22:30.677 13:06:34 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:22:30.677 13:06:34 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:30.936 13:06:35 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:22:30.936 13:06:35 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:22:30.936 13:06:35 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:31.195 13:06:35 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:22:31.195 13:06:35 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:22:31.195 13:06:35 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:22:31.454 13:06:35 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:22:31.454 13:06:35 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:22:31.454 13:06:35 -- bdev/bdev_raid.sh@489 -- # i=3 00:22:31.454 13:06:35 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:31.713 [2024-04-17 13:06:35.721873] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:31.713 [2024-04-17 13:06:35.721972] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:31.713 [2024-04-17 13:06:35.722010] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cf80 00:22:31.713 [2024-04-17 13:06:35.722039] 
vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:31.713 [2024-04-17 13:06:35.722594] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:31.713 [2024-04-17 13:06:35.722646] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:31.713 [2024-04-17 13:06:35.722764] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:22:31.713 [2024-04-17 13:06:35.722780] bdev_raid.c:3395:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:31.713 [2024-04-17 13:06:35.722787] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:31.713 [2024-04-17 13:06:35.722808] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cc80 name raid_bdev1, state configuring 00:22:31.713 [2024-04-17 13:06:35.722880] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:31.713 pt4 00:22:31.713 13:06:35 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:22:31.713 13:06:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:31.713 13:06:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:31.713 13:06:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:31.713 13:06:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:31.714 13:06:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:31.714 13:06:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:31.714 13:06:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:31.714 13:06:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:31.714 13:06:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:31.714 13:06:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.714 13:06:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:31.973 13:06:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:31.973 "name": "raid_bdev1", 00:22:31.973 "uuid": "dbf9754a-f43c-49ad-b536-7a8d36387041", 00:22:31.973 "strip_size_kb": 0, 00:22:31.973 "state": "configuring", 00:22:31.973 "raid_level": "raid1", 00:22:31.973 "superblock": true, 00:22:31.973 "num_base_bdevs": 4, 00:22:31.973 "num_base_bdevs_discovered": 1, 00:22:31.973 "num_base_bdevs_operational": 3, 00:22:31.973 "base_bdevs_list": [ 00:22:31.973 { 00:22:31.973 "name": null, 00:22:31.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:31.973 "is_configured": false, 00:22:31.973 "data_offset": 2048, 00:22:31.973 "data_size": 63488 00:22:31.973 }, 00:22:31.973 { 00:22:31.973 "name": null, 00:22:31.973 "uuid": "9742f79d-1556-5e27-8b0b-1a1ca51579a1", 00:22:31.973 "is_configured": false, 00:22:31.973 "data_offset": 2048, 00:22:31.973 "data_size": 63488 00:22:31.973 }, 00:22:31.973 { 00:22:31.973 "name": null, 00:22:31.973 "uuid": "d865c3b0-8107-5b07-9f35-1047914bd194", 00:22:31.973 "is_configured": false, 00:22:31.973 "data_offset": 2048, 00:22:31.973 "data_size": 63488 00:22:31.973 }, 00:22:31.973 { 00:22:31.973 "name": "pt4", 00:22:31.973 "uuid": "d0a28536-8026-5d02-862d-9fcdde57cf52", 00:22:31.973 "is_configured": true, 00:22:31.973 "data_offset": 2048, 00:22:31.973 "data_size": 63488 00:22:31.973 } 00:22:31.973 ] 00:22:31.973 }' 00:22:31.973 13:06:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:31.973 13:06:35 -- 
common/autotest_common.sh@10 -- # set +x 00:22:32.541 13:06:36 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:22:32.541 13:06:36 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:22:32.541 13:06:36 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:32.799 [2024-04-17 13:06:36.878118] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:32.799 [2024-04-17 13:06:36.878259] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:32.799 [2024-04-17 13:06:36.878299] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d580 00:22:32.799 [2024-04-17 13:06:36.878327] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:32.799 [2024-04-17 13:06:36.878886] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:32.799 [2024-04-17 13:06:36.878953] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:32.799 [2024-04-17 13:06:36.879060] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:32.799 [2024-04-17 13:06:36.879088] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:32.799 pt2 00:22:32.799 13:06:36 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:22:32.799 13:06:36 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:22:32.799 13:06:36 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:33.057 [2024-04-17 13:06:37.142177] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:33.057 [2024-04-17 13:06:37.142298] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:33.058 [2024-04-17 13:06:37.142334] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d880 00:22:33.058 [2024-04-17 13:06:37.142363] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:33.058 [2024-04-17 13:06:37.142886] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:33.058 [2024-04-17 13:06:37.142954] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:33.058 [2024-04-17 13:06:37.143071] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:22:33.058 [2024-04-17 13:06:37.143115] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:33.058 [2024-04-17 13:06:37.143269] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000d280 00:22:33.058 [2024-04-17 13:06:37.143296] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:33.058 [2024-04-17 13:06:37.143411] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:22:33.058 [2024-04-17 13:06:37.143776] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000d280 00:22:33.058 [2024-04-17 13:06:37.143800] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000d280 00:22:33.058 [2024-04-17 13:06:37.143956] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:33.058 pt3 00:22:33.058 13:06:37 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 
00:22:33.058 13:06:37 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:22:33.058 13:06:37 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:33.058 13:06:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:33.058 13:06:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:33.058 13:06:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:33.058 13:06:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:33.058 13:06:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:33.058 13:06:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:33.058 13:06:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:33.058 13:06:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:33.058 13:06:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:33.058 13:06:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.058 13:06:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:33.316 13:06:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:33.316 "name": "raid_bdev1", 00:22:33.316 "uuid": "dbf9754a-f43c-49ad-b536-7a8d36387041", 00:22:33.316 "strip_size_kb": 0, 00:22:33.316 "state": "online", 00:22:33.316 "raid_level": "raid1", 00:22:33.316 "superblock": true, 00:22:33.316 "num_base_bdevs": 4, 00:22:33.316 "num_base_bdevs_discovered": 3, 00:22:33.316 "num_base_bdevs_operational": 3, 00:22:33.316 "base_bdevs_list": [ 00:22:33.316 { 00:22:33.316 "name": null, 00:22:33.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.316 "is_configured": false, 00:22:33.316 "data_offset": 2048, 00:22:33.316 "data_size": 63488 00:22:33.316 }, 00:22:33.316 { 00:22:33.316 "name": "pt2", 00:22:33.316 "uuid": "9742f79d-1556-5e27-8b0b-1a1ca51579a1", 00:22:33.316 "is_configured": true, 00:22:33.316 "data_offset": 2048, 00:22:33.316 "data_size": 63488 00:22:33.316 }, 00:22:33.316 { 00:22:33.316 "name": "pt3", 00:22:33.316 "uuid": "d865c3b0-8107-5b07-9f35-1047914bd194", 00:22:33.316 "is_configured": true, 00:22:33.316 "data_offset": 2048, 00:22:33.316 "data_size": 63488 00:22:33.316 }, 00:22:33.316 { 00:22:33.316 "name": "pt4", 00:22:33.316 "uuid": "d0a28536-8026-5d02-862d-9fcdde57cf52", 00:22:33.316 "is_configured": true, 00:22:33.316 "data_offset": 2048, 00:22:33.316 "data_size": 63488 00:22:33.316 } 00:22:33.316 ] 00:22:33.316 }' 00:22:33.316 13:06:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:33.316 13:06:37 -- common/autotest_common.sh@10 -- # set +x 00:22:33.917 13:06:37 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:33.917 13:06:37 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:22:34.176 [2024-04-17 13:06:38.218789] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:34.176 13:06:38 -- bdev/bdev_raid.sh@506 -- # '[' dbf9754a-f43c-49ad-b536-7a8d36387041 '!=' dbf9754a-f43c-49ad-b536-7a8d36387041 ']' 00:22:34.176 13:06:38 -- bdev/bdev_raid.sh@511 -- # killprocess 129816 00:22:34.176 13:06:38 -- common/autotest_common.sh@924 -- # '[' -z 129816 ']' 00:22:34.176 13:06:38 -- common/autotest_common.sh@928 -- # kill -0 129816 00:22:34.176 13:06:38 -- common/autotest_common.sh@929 -- # uname 00:22:34.176 13:06:38 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:22:34.176 13:06:38 -- common/autotest_common.sh@930 -- # ps 
--no-headers -o comm= 129816 00:22:34.176 killing process with pid 129816 00:22:34.176 13:06:38 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:22:34.176 13:06:38 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:22:34.176 13:06:38 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 129816' 00:22:34.176 13:06:38 -- common/autotest_common.sh@943 -- # kill 129816 00:22:34.176 13:06:38 -- common/autotest_common.sh@948 -- # wait 129816 00:22:34.176 [2024-04-17 13:06:38.254777] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:34.176 [2024-04-17 13:06:38.255204] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:34.176 [2024-04-17 13:06:38.255336] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:34.176 [2024-04-17 13:06:38.255496] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000d280 name raid_bdev1, state offline 00:22:34.744 [2024-04-17 13:06:38.642235] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:35.681 13:06:39 -- bdev/bdev_raid.sh@513 -- # return 0 00:22:35.681 00:22:35.681 real 0m23.759s 00:22:35.681 user 0m43.849s 00:22:35.681 sys 0m2.569s 00:22:35.681 13:06:39 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:22:35.681 13:06:39 -- common/autotest_common.sh@10 -- # set +x 00:22:35.681 ************************************ 00:22:35.681 END TEST raid_superblock_test 00:22:35.681 ************************************ 00:22:35.681 13:06:39 -- bdev/bdev_raid.sh@733 -- # '[' true = true ']' 00:22:35.681 13:06:39 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:22:35.681 13:06:39 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false 00:22:35.681 13:06:39 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:22:35.681 13:06:39 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:22:35.681 13:06:39 -- common/autotest_common.sh@10 -- # set +x 00:22:35.940 ************************************ 00:22:35.940 START TEST raid_rebuild_test 00:22:35.940 ************************************ 00:22:35.940 13:06:39 -- common/autotest_common.sh@1099 -- # raid_rebuild_test raid1 2 false false 00:22:35.940 13:06:39 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:22:35.940 13:06:39 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:22:35.940 13:06:39 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:22:35.940 13:06:39 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:22:35.940 13:06:39 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:22:35.940 13:06:39 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:35.940 13:06:39 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:35.940 13:06:39 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:22:35.940 13:06:39 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:35.940 13:06:39 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:35.940 13:06:39 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:22:35.940 13:06:39 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:35.940 13:06:39 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:35.940 13:06:39 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:35.940 13:06:39 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:22:35.940 13:06:39 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:35.940 13:06:39 -- bdev/bdev_raid.sh@524 -- # local create_arg 
00:22:35.940 13:06:39 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:35.940 13:06:39 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:22:35.940 13:06:39 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:22:35.940 13:06:39 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:22:35.940 13:06:39 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:22:35.940 13:06:39 -- bdev/bdev_raid.sh@544 -- # raid_pid=130551 00:22:35.940 13:06:39 -- bdev/bdev_raid.sh@545 -- # waitforlisten 130551 /var/tmp/spdk-raid.sock 00:22:35.940 13:06:39 -- common/autotest_common.sh@817 -- # '[' -z 130551 ']' 00:22:35.940 13:06:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:35.940 13:06:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:35.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:35.940 13:06:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:35.940 13:06:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:35.940 13:06:39 -- common/autotest_common.sh@10 -- # set +x 00:22:35.940 13:06:39 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:35.940 [2024-04-17 13:06:39.932295] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:22:35.940 [2024-04-17 13:06:39.932715] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130551 ] 00:22:35.940 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:35.940 Zero copy mechanism will not be used. 
00:22:36.199 [2024-04-17 13:06:40.099663] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.199 [2024-04-17 13:06:40.295861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.458 [2024-04-17 13:06:40.477972] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:36.717 13:06:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:36.717 13:06:40 -- common/autotest_common.sh@850 -- # return 0 00:22:36.717 13:06:40 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:36.717 13:06:40 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:36.717 13:06:40 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:37.283 BaseBdev1 00:22:37.283 13:06:41 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:37.283 13:06:41 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:37.283 13:06:41 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:37.283 BaseBdev2 00:22:37.542 13:06:41 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:37.542 spare_malloc 00:22:37.542 13:06:41 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:38.109 spare_delay 00:22:38.109 13:06:41 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:38.109 [2024-04-17 13:06:42.253672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:38.109 [2024-04-17 13:06:42.253839] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:38.109 [2024-04-17 13:06:42.253880] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:38.109 [2024-04-17 13:06:42.253976] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:38.367 [2024-04-17 13:06:42.256846] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:38.367 [2024-04-17 13:06:42.256917] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:38.367 spare 00:22:38.367 13:06:42 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:22:38.658 [2024-04-17 13:06:42.521847] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:38.658 [2024-04-17 13:06:42.524059] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:38.658 [2024-04-17 13:06:42.524150] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:22:38.658 [2024-04-17 13:06:42.524163] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:38.658 [2024-04-17 13:06:42.524392] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:22:38.658 [2024-04-17 13:06:42.524821] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:22:38.658 [2024-04-17 13:06:42.524848] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
raid_bdev1, raid_bdev 0x616000008480 00:22:38.658 [2024-04-17 13:06:42.525063] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:38.658 13:06:42 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:38.658 13:06:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:38.658 13:06:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:38.658 13:06:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:38.658 13:06:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:38.659 13:06:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:38.659 13:06:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:38.659 13:06:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:38.659 13:06:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:38.659 13:06:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:38.659 13:06:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:38.659 13:06:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:38.659 13:06:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:38.659 "name": "raid_bdev1", 00:22:38.659 "uuid": "c84b725b-a408-48bc-9216-3f23f85a5c1f", 00:22:38.659 "strip_size_kb": 0, 00:22:38.659 "state": "online", 00:22:38.659 "raid_level": "raid1", 00:22:38.659 "superblock": false, 00:22:38.659 "num_base_bdevs": 2, 00:22:38.659 "num_base_bdevs_discovered": 2, 00:22:38.659 "num_base_bdevs_operational": 2, 00:22:38.659 "base_bdevs_list": [ 00:22:38.659 { 00:22:38.659 "name": "BaseBdev1", 00:22:38.659 "uuid": "fc170d62-6e2b-4739-a283-8c06a228a954", 00:22:38.659 "is_configured": true, 00:22:38.659 "data_offset": 0, 00:22:38.659 "data_size": 65536 00:22:38.659 }, 00:22:38.659 { 00:22:38.659 "name": "BaseBdev2", 00:22:38.659 "uuid": "265d0de0-2cf1-4737-bb42-d1bc97608f6b", 00:22:38.659 "is_configured": true, 00:22:38.659 "data_offset": 0, 00:22:38.659 "data_size": 65536 00:22:38.659 } 00:22:38.659 ] 00:22:38.659 }' 00:22:38.659 13:06:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:38.659 13:06:42 -- common/autotest_common.sh@10 -- # set +x 00:22:39.591 13:06:43 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:39.591 13:06:43 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:39.591 [2024-04-17 13:06:43.682290] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:39.591 13:06:43 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:22:39.591 13:06:43 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:39.591 13:06:43 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:39.850 13:06:43 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:22:39.850 13:06:43 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:22:39.850 13:06:43 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:22:39.850 13:06:43 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:22:39.850 13:06:43 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:39.850 13:06:43 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:22:39.850 13:06:43 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:39.850 13:06:43 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:22:39.850 
13:06:43 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:39.850 13:06:43 -- bdev/nbd_common.sh@12 -- # local i 00:22:39.850 13:06:43 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:39.850 13:06:43 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:39.850 13:06:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:40.109 [2024-04-17 13:06:44.186190] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:22:40.109 /dev/nbd0 00:22:40.109 13:06:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:40.109 13:06:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:40.109 13:06:44 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:22:40.109 13:06:44 -- common/autotest_common.sh@855 -- # local i 00:22:40.109 13:06:44 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:22:40.109 13:06:44 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:22:40.109 13:06:44 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:22:40.109 13:06:44 -- common/autotest_common.sh@859 -- # break 00:22:40.109 13:06:44 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:22:40.109 13:06:44 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:22:40.109 13:06:44 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:40.109 1+0 records in 00:22:40.109 1+0 records out 00:22:40.109 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241361 s, 17.0 MB/s 00:22:40.109 13:06:44 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:40.109 13:06:44 -- common/autotest_common.sh@872 -- # size=4096 00:22:40.109 13:06:44 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:40.109 13:06:44 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:22:40.109 13:06:44 -- common/autotest_common.sh@875 -- # return 0 00:22:40.109 13:06:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:40.109 13:06:44 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:40.109 13:06:44 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:22:40.109 13:06:44 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:22:40.109 13:06:44 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:22:45.379 65536+0 records in 00:22:45.379 65536+0 records out 00:22:45.379 33554432 bytes (34 MB, 32 MiB) copied, 4.55343 s, 7.4 MB/s 00:22:45.379 13:06:48 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:45.380 13:06:48 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:45.380 13:06:48 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:22:45.380 13:06:48 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:45.380 13:06:48 -- bdev/nbd_common.sh@51 -- # local i 00:22:45.380 13:06:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:45.380 13:06:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:45.380 13:06:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:45.380 13:06:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:45.380 13:06:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:45.380 13:06:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:45.380 13:06:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:45.380 13:06:49 -- bdev/nbd_common.sh@38 
-- # grep -q -w nbd0 /proc/partitions 00:22:45.380 13:06:49 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:45.380 [2024-04-17 13:06:49.060253] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:45.380 13:06:49 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:45.380 13:06:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:45.380 13:06:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:45.380 13:06:49 -- bdev/nbd_common.sh@41 -- # break 00:22:45.380 13:06:49 -- bdev/nbd_common.sh@45 -- # return 0 00:22:45.380 13:06:49 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:45.380 [2024-04-17 13:06:49.404154] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:45.380 13:06:49 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:45.380 13:06:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:45.380 13:06:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:45.380 13:06:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:45.380 13:06:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:45.380 13:06:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:22:45.380 13:06:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:45.380 13:06:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:45.380 13:06:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:45.380 13:06:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:45.380 13:06:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.380 13:06:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.638 13:06:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:45.638 "name": "raid_bdev1", 00:22:45.638 "uuid": "c84b725b-a408-48bc-9216-3f23f85a5c1f", 00:22:45.638 "strip_size_kb": 0, 00:22:45.638 "state": "online", 00:22:45.638 "raid_level": "raid1", 00:22:45.638 "superblock": false, 00:22:45.638 "num_base_bdevs": 2, 00:22:45.638 "num_base_bdevs_discovered": 1, 00:22:45.638 "num_base_bdevs_operational": 1, 00:22:45.638 "base_bdevs_list": [ 00:22:45.638 { 00:22:45.638 "name": null, 00:22:45.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.638 "is_configured": false, 00:22:45.638 "data_offset": 0, 00:22:45.638 "data_size": 65536 00:22:45.638 }, 00:22:45.638 { 00:22:45.638 "name": "BaseBdev2", 00:22:45.638 "uuid": "265d0de0-2cf1-4737-bb42-d1bc97608f6b", 00:22:45.638 "is_configured": true, 00:22:45.638 "data_offset": 0, 00:22:45.638 "data_size": 65536 00:22:45.638 } 00:22:45.638 ] 00:22:45.638 }' 00:22:45.638 13:06:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:45.638 13:06:49 -- common/autotest_common.sh@10 -- # set +x 00:22:46.574 13:06:50 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:46.574 [2024-04-17 13:06:50.664466] bdev_raid.c:3247:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:46.574 [2024-04-17 13:06:50.664549] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:46.574 [2024-04-17 13:06:50.678423] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0b500 00:22:46.574 [2024-04-17 13:06:50.680484] bdev_raid.c:2751:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 00:22:46.574 13:06:50 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:47.975 13:06:51 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:47.975 13:06:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:47.975 13:06:51 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:47.975 13:06:51 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:47.975 13:06:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:47.975 13:06:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:47.975 13:06:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:47.975 13:06:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:47.975 "name": "raid_bdev1", 00:22:47.975 "uuid": "c84b725b-a408-48bc-9216-3f23f85a5c1f", 00:22:47.975 "strip_size_kb": 0, 00:22:47.975 "state": "online", 00:22:47.975 "raid_level": "raid1", 00:22:47.975 "superblock": false, 00:22:47.975 "num_base_bdevs": 2, 00:22:47.975 "num_base_bdevs_discovered": 2, 00:22:47.975 "num_base_bdevs_operational": 2, 00:22:47.975 "process": { 00:22:47.975 "type": "rebuild", 00:22:47.975 "target": "spare", 00:22:47.975 "progress": { 00:22:47.975 "blocks": 24576, 00:22:47.975 "percent": 37 00:22:47.975 } 00:22:47.975 }, 00:22:47.975 "base_bdevs_list": [ 00:22:47.975 { 00:22:47.975 "name": "spare", 00:22:47.975 "uuid": "c913c8f6-6397-531a-8930-0861ee83e3a3", 00:22:47.975 "is_configured": true, 00:22:47.975 "data_offset": 0, 00:22:47.975 "data_size": 65536 00:22:47.975 }, 00:22:47.975 { 00:22:47.975 "name": "BaseBdev2", 00:22:47.975 "uuid": "265d0de0-2cf1-4737-bb42-d1bc97608f6b", 00:22:47.975 "is_configured": true, 00:22:47.975 "data_offset": 0, 00:22:47.975 "data_size": 65536 00:22:47.975 } 00:22:47.975 ] 00:22:47.975 }' 00:22:47.975 13:06:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:47.975 13:06:52 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:47.975 13:06:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:47.975 13:06:52 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:47.975 13:06:52 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:48.243 [2024-04-17 13:06:52.370204] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:48.501 [2024-04-17 13:06:52.389764] bdev_raid.c:2442:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:48.501 [2024-04-17 13:06:52.389876] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:48.501 13:06:52 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:48.501 13:06:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:48.501 13:06:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:48.501 13:06:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:48.501 13:06:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:48.501 13:06:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:22:48.501 13:06:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:48.501 13:06:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:48.501 13:06:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:48.501 13:06:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:48.501 
13:06:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.501 13:06:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:48.759 13:06:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:48.759 "name": "raid_bdev1", 00:22:48.759 "uuid": "c84b725b-a408-48bc-9216-3f23f85a5c1f", 00:22:48.759 "strip_size_kb": 0, 00:22:48.759 "state": "online", 00:22:48.759 "raid_level": "raid1", 00:22:48.759 "superblock": false, 00:22:48.759 "num_base_bdevs": 2, 00:22:48.759 "num_base_bdevs_discovered": 1, 00:22:48.759 "num_base_bdevs_operational": 1, 00:22:48.759 "base_bdevs_list": [ 00:22:48.759 { 00:22:48.759 "name": null, 00:22:48.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.759 "is_configured": false, 00:22:48.759 "data_offset": 0, 00:22:48.759 "data_size": 65536 00:22:48.759 }, 00:22:48.759 { 00:22:48.759 "name": "BaseBdev2", 00:22:48.759 "uuid": "265d0de0-2cf1-4737-bb42-d1bc97608f6b", 00:22:48.759 "is_configured": true, 00:22:48.759 "data_offset": 0, 00:22:48.759 "data_size": 65536 00:22:48.759 } 00:22:48.759 ] 00:22:48.759 }' 00:22:48.759 13:06:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:48.759 13:06:52 -- common/autotest_common.sh@10 -- # set +x 00:22:49.325 13:06:53 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:49.325 13:06:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:49.325 13:06:53 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:49.325 13:06:53 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:49.325 13:06:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:49.325 13:06:53 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:49.325 13:06:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:49.594 13:06:53 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:49.594 "name": "raid_bdev1", 00:22:49.594 "uuid": "c84b725b-a408-48bc-9216-3f23f85a5c1f", 00:22:49.594 "strip_size_kb": 0, 00:22:49.594 "state": "online", 00:22:49.594 "raid_level": "raid1", 00:22:49.594 "superblock": false, 00:22:49.594 "num_base_bdevs": 2, 00:22:49.594 "num_base_bdevs_discovered": 1, 00:22:49.594 "num_base_bdevs_operational": 1, 00:22:49.594 "base_bdevs_list": [ 00:22:49.594 { 00:22:49.594 "name": null, 00:22:49.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:49.594 "is_configured": false, 00:22:49.594 "data_offset": 0, 00:22:49.594 "data_size": 65536 00:22:49.594 }, 00:22:49.594 { 00:22:49.594 "name": "BaseBdev2", 00:22:49.594 "uuid": "265d0de0-2cf1-4737-bb42-d1bc97608f6b", 00:22:49.594 "is_configured": true, 00:22:49.594 "data_offset": 0, 00:22:49.594 "data_size": 65536 00:22:49.594 } 00:22:49.594 ] 00:22:49.594 }' 00:22:49.594 13:06:53 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:49.594 13:06:53 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:49.594 13:06:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:49.594 13:06:53 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:49.594 13:06:53 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:49.865 [2024-04-17 13:06:53.928800] bdev_raid.c:3247:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:49.865 [2024-04-17 13:06:53.928865] 
bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:49.865 [2024-04-17 13:06:53.941094] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0b6a0 00:22:49.865 [2024-04-17 13:06:53.943074] bdev_raid.c:2751:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:49.865 13:06:53 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:51.240 13:06:54 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:51.240 13:06:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:51.240 13:06:54 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:51.240 13:06:54 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:51.240 13:06:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:51.240 13:06:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.240 13:06:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:51.240 13:06:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:51.240 "name": "raid_bdev1", 00:22:51.240 "uuid": "c84b725b-a408-48bc-9216-3f23f85a5c1f", 00:22:51.240 "strip_size_kb": 0, 00:22:51.240 "state": "online", 00:22:51.240 "raid_level": "raid1", 00:22:51.240 "superblock": false, 00:22:51.240 "num_base_bdevs": 2, 00:22:51.240 "num_base_bdevs_discovered": 2, 00:22:51.240 "num_base_bdevs_operational": 2, 00:22:51.240 "process": { 00:22:51.240 "type": "rebuild", 00:22:51.240 "target": "spare", 00:22:51.240 "progress": { 00:22:51.240 "blocks": 24576, 00:22:51.240 "percent": 37 00:22:51.240 } 00:22:51.240 }, 00:22:51.240 "base_bdevs_list": [ 00:22:51.240 { 00:22:51.240 "name": "spare", 00:22:51.240 "uuid": "c913c8f6-6397-531a-8930-0861ee83e3a3", 00:22:51.240 "is_configured": true, 00:22:51.240 "data_offset": 0, 00:22:51.240 "data_size": 65536 00:22:51.240 }, 00:22:51.240 { 00:22:51.240 "name": "BaseBdev2", 00:22:51.240 "uuid": "265d0de0-2cf1-4737-bb42-d1bc97608f6b", 00:22:51.240 "is_configured": true, 00:22:51.240 "data_offset": 0, 00:22:51.240 "data_size": 65536 00:22:51.240 } 00:22:51.240 ] 00:22:51.240 }' 00:22:51.240 13:06:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:51.240 13:06:55 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:51.240 13:06:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:51.240 13:06:55 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:51.240 13:06:55 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:22:51.240 13:06:55 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:22:51.240 13:06:55 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:22:51.240 13:06:55 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:22:51.240 13:06:55 -- bdev/bdev_raid.sh@657 -- # local timeout=436 00:22:51.241 13:06:55 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:51.241 13:06:55 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:51.241 13:06:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:51.241 13:06:55 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:51.241 13:06:55 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:51.241 13:06:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:51.241 13:06:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.241 13:06:55 
-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:51.499 13:06:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:51.499 "name": "raid_bdev1", 00:22:51.499 "uuid": "c84b725b-a408-48bc-9216-3f23f85a5c1f", 00:22:51.499 "strip_size_kb": 0, 00:22:51.499 "state": "online", 00:22:51.499 "raid_level": "raid1", 00:22:51.499 "superblock": false, 00:22:51.499 "num_base_bdevs": 2, 00:22:51.499 "num_base_bdevs_discovered": 2, 00:22:51.499 "num_base_bdevs_operational": 2, 00:22:51.499 "process": { 00:22:51.499 "type": "rebuild", 00:22:51.499 "target": "spare", 00:22:51.499 "progress": { 00:22:51.499 "blocks": 30720, 00:22:51.499 "percent": 46 00:22:51.499 } 00:22:51.499 }, 00:22:51.499 "base_bdevs_list": [ 00:22:51.499 { 00:22:51.499 "name": "spare", 00:22:51.499 "uuid": "c913c8f6-6397-531a-8930-0861ee83e3a3", 00:22:51.499 "is_configured": true, 00:22:51.499 "data_offset": 0, 00:22:51.499 "data_size": 65536 00:22:51.499 }, 00:22:51.499 { 00:22:51.499 "name": "BaseBdev2", 00:22:51.499 "uuid": "265d0de0-2cf1-4737-bb42-d1bc97608f6b", 00:22:51.499 "is_configured": true, 00:22:51.499 "data_offset": 0, 00:22:51.499 "data_size": 65536 00:22:51.499 } 00:22:51.499 ] 00:22:51.499 }' 00:22:51.499 13:06:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:51.499 13:06:55 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:51.499 13:06:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:51.757 13:06:55 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:51.757 13:06:55 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:52.692 13:06:56 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:52.692 13:06:56 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:52.692 13:06:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:52.692 13:06:56 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:52.693 13:06:56 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:52.693 13:06:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:52.693 13:06:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.693 13:06:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:52.951 13:06:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:52.951 "name": "raid_bdev1", 00:22:52.951 "uuid": "c84b725b-a408-48bc-9216-3f23f85a5c1f", 00:22:52.951 "strip_size_kb": 0, 00:22:52.951 "state": "online", 00:22:52.951 "raid_level": "raid1", 00:22:52.951 "superblock": false, 00:22:52.951 "num_base_bdevs": 2, 00:22:52.951 "num_base_bdevs_discovered": 2, 00:22:52.951 "num_base_bdevs_operational": 2, 00:22:52.951 "process": { 00:22:52.951 "type": "rebuild", 00:22:52.951 "target": "spare", 00:22:52.951 "progress": { 00:22:52.951 "blocks": 59392, 00:22:52.951 "percent": 90 00:22:52.951 } 00:22:52.951 }, 00:22:52.951 "base_bdevs_list": [ 00:22:52.951 { 00:22:52.951 "name": "spare", 00:22:52.951 "uuid": "c913c8f6-6397-531a-8930-0861ee83e3a3", 00:22:52.951 "is_configured": true, 00:22:52.951 "data_offset": 0, 00:22:52.951 "data_size": 65536 00:22:52.951 }, 00:22:52.951 { 00:22:52.951 "name": "BaseBdev2", 00:22:52.951 "uuid": "265d0de0-2cf1-4737-bb42-d1bc97608f6b", 00:22:52.951 "is_configured": true, 00:22:52.951 "data_offset": 0, 00:22:52.951 "data_size": 65536 00:22:52.951 } 00:22:52.951 ] 00:22:52.951 }' 00:22:52.951 13:06:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // 
"none"' 00:22:52.951 13:06:56 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:52.951 13:06:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:52.951 13:06:57 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:52.951 13:06:57 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:53.210 [2024-04-17 13:06:57.160441] bdev_raid.c:2716:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:53.210 [2024-04-17 13:06:57.160529] bdev_raid.c:2433:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:53.210 [2024-04-17 13:06:57.160600] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:54.145 13:06:58 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:54.145 13:06:58 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:54.145 13:06:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:54.145 13:06:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:54.145 13:06:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:54.145 13:06:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:54.145 13:06:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.145 13:06:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.145 13:06:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:54.145 "name": "raid_bdev1", 00:22:54.145 "uuid": "c84b725b-a408-48bc-9216-3f23f85a5c1f", 00:22:54.145 "strip_size_kb": 0, 00:22:54.145 "state": "online", 00:22:54.145 "raid_level": "raid1", 00:22:54.145 "superblock": false, 00:22:54.145 "num_base_bdevs": 2, 00:22:54.145 "num_base_bdevs_discovered": 2, 00:22:54.145 "num_base_bdevs_operational": 2, 00:22:54.145 "base_bdevs_list": [ 00:22:54.145 { 00:22:54.145 "name": "spare", 00:22:54.145 "uuid": "c913c8f6-6397-531a-8930-0861ee83e3a3", 00:22:54.145 "is_configured": true, 00:22:54.145 "data_offset": 0, 00:22:54.145 "data_size": 65536 00:22:54.145 }, 00:22:54.145 { 00:22:54.145 "name": "BaseBdev2", 00:22:54.145 "uuid": "265d0de0-2cf1-4737-bb42-d1bc97608f6b", 00:22:54.145 "is_configured": true, 00:22:54.146 "data_offset": 0, 00:22:54.146 "data_size": 65536 00:22:54.146 } 00:22:54.146 ] 00:22:54.146 }' 00:22:54.146 13:06:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:54.146 13:06:58 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:54.146 13:06:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:54.404 13:06:58 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:54.404 13:06:58 -- bdev/bdev_raid.sh@660 -- # break 00:22:54.404 13:06:58 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:54.404 13:06:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:54.404 13:06:58 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:54.404 13:06:58 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:54.404 13:06:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:54.404 13:06:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.404 13:06:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.404 13:06:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:54.404 "name": "raid_bdev1", 00:22:54.404 "uuid": 
"c84b725b-a408-48bc-9216-3f23f85a5c1f", 00:22:54.404 "strip_size_kb": 0, 00:22:54.404 "state": "online", 00:22:54.404 "raid_level": "raid1", 00:22:54.404 "superblock": false, 00:22:54.404 "num_base_bdevs": 2, 00:22:54.404 "num_base_bdevs_discovered": 2, 00:22:54.404 "num_base_bdevs_operational": 2, 00:22:54.404 "base_bdevs_list": [ 00:22:54.404 { 00:22:54.404 "name": "spare", 00:22:54.404 "uuid": "c913c8f6-6397-531a-8930-0861ee83e3a3", 00:22:54.404 "is_configured": true, 00:22:54.404 "data_offset": 0, 00:22:54.404 "data_size": 65536 00:22:54.404 }, 00:22:54.404 { 00:22:54.404 "name": "BaseBdev2", 00:22:54.404 "uuid": "265d0de0-2cf1-4737-bb42-d1bc97608f6b", 00:22:54.404 "is_configured": true, 00:22:54.404 "data_offset": 0, 00:22:54.404 "data_size": 65536 00:22:54.404 } 00:22:54.404 ] 00:22:54.404 }' 00:22:54.404 13:06:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:54.663 13:06:58 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:54.663 13:06:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:54.663 13:06:58 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:54.663 13:06:58 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:54.663 13:06:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:54.663 13:06:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:54.663 13:06:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:54.663 13:06:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:54.663 13:06:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:54.663 13:06:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:54.663 13:06:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:54.663 13:06:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:54.663 13:06:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:54.663 13:06:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.663 13:06:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.921 13:06:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:54.921 "name": "raid_bdev1", 00:22:54.921 "uuid": "c84b725b-a408-48bc-9216-3f23f85a5c1f", 00:22:54.921 "strip_size_kb": 0, 00:22:54.921 "state": "online", 00:22:54.921 "raid_level": "raid1", 00:22:54.921 "superblock": false, 00:22:54.921 "num_base_bdevs": 2, 00:22:54.921 "num_base_bdevs_discovered": 2, 00:22:54.921 "num_base_bdevs_operational": 2, 00:22:54.921 "base_bdevs_list": [ 00:22:54.921 { 00:22:54.921 "name": "spare", 00:22:54.921 "uuid": "c913c8f6-6397-531a-8930-0861ee83e3a3", 00:22:54.921 "is_configured": true, 00:22:54.921 "data_offset": 0, 00:22:54.921 "data_size": 65536 00:22:54.921 }, 00:22:54.921 { 00:22:54.921 "name": "BaseBdev2", 00:22:54.921 "uuid": "265d0de0-2cf1-4737-bb42-d1bc97608f6b", 00:22:54.921 "is_configured": true, 00:22:54.921 "data_offset": 0, 00:22:54.921 "data_size": 65536 00:22:54.921 } 00:22:54.921 ] 00:22:54.921 }' 00:22:54.921 13:06:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:54.921 13:06:58 -- common/autotest_common.sh@10 -- # set +x 00:22:55.487 13:06:59 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:55.745 [2024-04-17 13:06:59.813249] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:55.745 [2024-04-17 13:06:59.813289] 
bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:55.745 [2024-04-17 13:06:59.813386] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:55.745 [2024-04-17 13:06:59.813451] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:55.745 [2024-04-17 13:06:59.813463] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:22:55.745 13:06:59 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:55.745 13:06:59 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:56.003 13:07:00 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:56.003 13:07:00 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:22:56.003 13:07:00 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:56.003 13:07:00 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:56.003 13:07:00 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:22:56.003 13:07:00 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:56.003 13:07:00 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:22:56.003 13:07:00 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:56.003 13:07:00 -- bdev/nbd_common.sh@12 -- # local i 00:22:56.003 13:07:00 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:56.003 13:07:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:56.003 13:07:00 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:56.262 /dev/nbd0 00:22:56.262 13:07:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:56.262 13:07:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:56.262 13:07:00 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:22:56.262 13:07:00 -- common/autotest_common.sh@855 -- # local i 00:22:56.262 13:07:00 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:22:56.262 13:07:00 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:22:56.262 13:07:00 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:22:56.262 13:07:00 -- common/autotest_common.sh@859 -- # break 00:22:56.262 13:07:00 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:22:56.262 13:07:00 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:22:56.262 13:07:00 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:56.262 1+0 records in 00:22:56.262 1+0 records out 00:22:56.262 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416637 s, 9.8 MB/s 00:22:56.262 13:07:00 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:56.262 13:07:00 -- common/autotest_common.sh@872 -- # size=4096 00:22:56.262 13:07:00 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:56.262 13:07:00 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:22:56.262 13:07:00 -- common/autotest_common.sh@875 -- # return 0 00:22:56.262 13:07:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:56.262 13:07:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:56.262 13:07:00 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:22:56.520 /dev/nbd1 00:22:56.520 13:07:00 -- bdev/nbd_common.sh@17 -- # 
basename /dev/nbd1 00:22:56.520 13:07:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:56.520 13:07:00 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:22:56.520 13:07:00 -- common/autotest_common.sh@855 -- # local i 00:22:56.520 13:07:00 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:22:56.520 13:07:00 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:22:56.520 13:07:00 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:22:56.520 13:07:00 -- common/autotest_common.sh@859 -- # break 00:22:56.520 13:07:00 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:22:56.520 13:07:00 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:22:56.520 13:07:00 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:56.520 1+0 records in 00:22:56.520 1+0 records out 00:22:56.520 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391088 s, 10.5 MB/s 00:22:56.520 13:07:00 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:56.520 13:07:00 -- common/autotest_common.sh@872 -- # size=4096 00:22:56.520 13:07:00 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:56.520 13:07:00 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:22:56.520 13:07:00 -- common/autotest_common.sh@875 -- # return 0 00:22:56.520 13:07:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:56.520 13:07:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:56.520 13:07:00 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:56.778 13:07:00 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:22:56.778 13:07:00 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:56.778 13:07:00 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:22:56.778 13:07:00 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:56.778 13:07:00 -- bdev/nbd_common.sh@51 -- # local i 00:22:56.778 13:07:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:56.778 13:07:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:57.036 13:07:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:57.036 13:07:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:57.036 13:07:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:57.036 13:07:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:57.036 13:07:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:57.036 13:07:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:57.036 13:07:00 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:57.036 13:07:01 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:57.036 13:07:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:57.036 13:07:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:57.036 13:07:01 -- bdev/nbd_common.sh@41 -- # break 00:22:57.036 13:07:01 -- bdev/nbd_common.sh@45 -- # return 0 00:22:57.036 13:07:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:57.036 13:07:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:57.294 13:07:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:57.294 13:07:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:57.294 13:07:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:57.294 13:07:01 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:57.294 13:07:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:57.294 13:07:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:57.294 13:07:01 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:57.294 13:07:01 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:57.294 13:07:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:57.294 13:07:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:57.294 13:07:01 -- bdev/nbd_common.sh@41 -- # break 00:22:57.294 13:07:01 -- bdev/nbd_common.sh@45 -- # return 0 00:22:57.294 13:07:01 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:22:57.294 13:07:01 -- bdev/bdev_raid.sh@709 -- # killprocess 130551 00:22:57.294 13:07:01 -- common/autotest_common.sh@924 -- # '[' -z 130551 ']' 00:22:57.294 13:07:01 -- common/autotest_common.sh@928 -- # kill -0 130551 00:22:57.294 13:07:01 -- common/autotest_common.sh@929 -- # uname 00:22:57.294 13:07:01 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:22:57.294 13:07:01 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 130551 00:22:57.294 13:07:01 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:22:57.294 killing process with pid 130551 00:22:57.294 Received shutdown signal, test time was about 60.000000 seconds 00:22:57.294 00:22:57.294 Latency(us) 00:22:57.294 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.294 =================================================================================================================== 00:22:57.294 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:57.294 13:07:01 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:22:57.294 13:07:01 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 130551' 00:22:57.294 13:07:01 -- common/autotest_common.sh@943 -- # kill 130551 00:22:57.294 13:07:01 -- common/autotest_common.sh@948 -- # wait 130551 00:22:57.294 [2024-04-17 13:07:01.432934] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:57.552 [2024-04-17 13:07:01.664856] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:58.924 13:07:02 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:58.924 ************************************ 00:22:58.924 END TEST raid_rebuild_test 00:22:58.924 ************************************ 00:22:58.924 00:22:58.924 real 0m22.862s 00:22:58.924 user 0m31.819s 00:22:58.924 sys 0m3.541s 00:22:58.924 13:07:02 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:22:58.924 13:07:02 -- common/autotest_common.sh@10 -- # set +x 00:22:58.924 13:07:02 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false 00:22:58.924 13:07:02 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:22:58.924 13:07:02 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:22:58.924 13:07:02 -- common/autotest_common.sh@10 -- # set +x 00:22:58.924 ************************************ 00:22:58.924 START TEST raid_rebuild_test_sb 00:22:58.924 ************************************ 00:22:58.924 13:07:02 -- common/autotest_common.sh@1099 -- # raid_rebuild_test raid1 2 true false 00:22:58.924 13:07:02 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:22:58.925 13:07:02 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:22:58.925 13:07:02 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:22:58.925 13:07:02 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:22:58.925 13:07:02 -- 
bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:22:58.925 13:07:02 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:58.925 13:07:02 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:58.925 13:07:02 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:22:58.925 13:07:02 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:58.925 13:07:02 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:58.925 13:07:02 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:22:58.925 13:07:02 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:58.925 13:07:02 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:58.925 13:07:02 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:58.925 13:07:02 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:22:58.925 13:07:02 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:58.925 13:07:02 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:22:58.925 13:07:02 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:58.925 13:07:02 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:22:58.925 13:07:02 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:22:58.925 13:07:02 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:22:58.925 13:07:02 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:22:58.925 13:07:02 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:22:58.925 13:07:02 -- bdev/bdev_raid.sh@544 -- # raid_pid=131166 00:22:58.925 13:07:02 -- bdev/bdev_raid.sh@545 -- # waitforlisten 131166 /var/tmp/spdk-raid.sock 00:22:58.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:58.925 13:07:02 -- common/autotest_common.sh@817 -- # '[' -z 131166 ']' 00:22:58.925 13:07:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:58.925 13:07:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:58.925 13:07:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:58.925 13:07:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:58.925 13:07:02 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:58.925 13:07:02 -- common/autotest_common.sh@10 -- # set +x 00:22:58.925 [2024-04-17 13:07:02.873040] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:22:58.925 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:58.925 Zero copy mechanism will not be used. 
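
The run below repeats the same rebuild round-trip with a superblock (note the create_arg+=' -s' above): each base bdev reserves 2048 blocks for the superblock, so the JSON dumps that follow report data_offset 2048 and data_size 63488 rather than the 0 / 65536 of the first test. A rough sketch of that round-trip, assuming the same $RPC alias as in the earlier sketch and simplifying the harness's verify_raid_bdev_process polling to a plain loop:

# Degrade the array, rebuild onto the spare, and wait for the process to finish.
$RPC bdev_raid_remove_base_bdev BaseBdev1
$RPC bdev_raid_add_base_bdev raid_bdev1 spare

# Poll until the rebuild process disappears from the raid bdev's JSON.
while true; do
    info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [ "$(jq -r '.process.type // "none"' <<< "$info")" = none ] && break
    sleep 1
done

# With -s, 2048 blocks per base bdev hold the superblock, hence the smaller data_size.
$RPC bdev_raid_get_bdevs all | jq -r '.[].base_bdevs_list[0].data_offset'   # expect 2048
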
00:22:58.925 [2024-04-17 13:07:02.873264] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131166 ] 00:22:58.925 [2024-04-17 13:07:03.041776] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.183 [2024-04-17 13:07:03.238639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:59.442 [2024-04-17 13:07:03.422947] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:59.701 13:07:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:59.701 13:07:03 -- common/autotest_common.sh@850 -- # return 0 00:22:59.701 13:07:03 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:59.701 13:07:03 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:59.701 13:07:03 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:59.959 BaseBdev1_malloc 00:23:00.216 13:07:04 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:00.216 [2024-04-17 13:07:04.319786] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:00.216 [2024-04-17 13:07:04.319933] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:00.216 [2024-04-17 13:07:04.319971] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:23:00.217 [2024-04-17 13:07:04.320018] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:00.217 [2024-04-17 13:07:04.322352] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:00.217 [2024-04-17 13:07:04.322418] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:00.217 BaseBdev1 00:23:00.217 13:07:04 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:00.217 13:07:04 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:00.217 13:07:04 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:00.475 BaseBdev2_malloc 00:23:00.475 13:07:04 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:00.733 [2024-04-17 13:07:04.801923] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:00.733 [2024-04-17 13:07:04.802057] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:00.733 [2024-04-17 13:07:04.802104] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:23:00.733 [2024-04-17 13:07:04.802162] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:00.733 [2024-04-17 13:07:04.804822] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:00.733 [2024-04-17 13:07:04.804890] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:00.733 BaseBdev2 00:23:00.733 13:07:04 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:00.992 spare_malloc 00:23:00.992 13:07:05 
-- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:01.309 spare_delay 00:23:01.309 13:07:05 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:01.567 [2024-04-17 13:07:05.484133] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:01.567 [2024-04-17 13:07:05.484283] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:01.567 [2024-04-17 13:07:05.484332] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:23:01.567 [2024-04-17 13:07:05.484376] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:01.567 [2024-04-17 13:07:05.487017] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:01.567 [2024-04-17 13:07:05.487088] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:01.567 spare 00:23:01.567 13:07:05 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:23:01.567 [2024-04-17 13:07:05.700204] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:01.567 [2024-04-17 13:07:05.702218] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:01.567 [2024-04-17 13:07:05.702517] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:23:01.567 [2024-04-17 13:07:05.702534] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:01.567 [2024-04-17 13:07:05.702685] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:23:01.567 [2024-04-17 13:07:05.703082] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:23:01.567 [2024-04-17 13:07:05.703116] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:23:01.567 [2024-04-17 13:07:05.703344] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:01.826 13:07:05 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:01.826 13:07:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:01.826 13:07:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:01.826 13:07:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:01.826 13:07:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:01.826 13:07:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:01.826 13:07:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:01.826 13:07:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:01.826 13:07:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:01.826 13:07:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:01.826 13:07:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:01.826 13:07:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:02.085 13:07:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:02.085 "name": "raid_bdev1", 00:23:02.085 "uuid": "406f3fa5-b723-4117-a1e7-cfb7971cc3a0", 00:23:02.085 
"strip_size_kb": 0, 00:23:02.085 "state": "online", 00:23:02.085 "raid_level": "raid1", 00:23:02.085 "superblock": true, 00:23:02.085 "num_base_bdevs": 2, 00:23:02.085 "num_base_bdevs_discovered": 2, 00:23:02.085 "num_base_bdevs_operational": 2, 00:23:02.085 "base_bdevs_list": [ 00:23:02.085 { 00:23:02.085 "name": "BaseBdev1", 00:23:02.085 "uuid": "6ea86c60-a8b4-5e75-84ac-25b2dacc215f", 00:23:02.085 "is_configured": true, 00:23:02.085 "data_offset": 2048, 00:23:02.085 "data_size": 63488 00:23:02.085 }, 00:23:02.085 { 00:23:02.085 "name": "BaseBdev2", 00:23:02.085 "uuid": "2ea92789-05d9-59b4-b311-580d8b4dcf02", 00:23:02.085 "is_configured": true, 00:23:02.085 "data_offset": 2048, 00:23:02.085 "data_size": 63488 00:23:02.085 } 00:23:02.085 ] 00:23:02.085 }' 00:23:02.085 13:07:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:02.085 13:07:05 -- common/autotest_common.sh@10 -- # set +x 00:23:02.652 13:07:06 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:02.652 13:07:06 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:23:02.924 [2024-04-17 13:07:06.968911] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:02.924 13:07:06 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:23:02.924 13:07:06 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:02.924 13:07:06 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:03.186 13:07:07 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:23:03.186 13:07:07 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:23:03.186 13:07:07 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:23:03.186 13:07:07 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:23:03.186 13:07:07 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:03.186 13:07:07 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:23:03.186 13:07:07 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:03.186 13:07:07 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:23:03.186 13:07:07 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:03.186 13:07:07 -- bdev/nbd_common.sh@12 -- # local i 00:23:03.186 13:07:07 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:03.186 13:07:07 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:03.186 13:07:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:03.445 [2024-04-17 13:07:07.468691] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:23:03.445 /dev/nbd0 00:23:03.445 13:07:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:03.445 13:07:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:03.445 13:07:07 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:23:03.445 13:07:07 -- common/autotest_common.sh@855 -- # local i 00:23:03.445 13:07:07 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:23:03.445 13:07:07 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:23:03.445 13:07:07 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:23:03.445 13:07:07 -- common/autotest_common.sh@859 -- # break 00:23:03.445 13:07:07 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:23:03.445 13:07:07 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:23:03.445 13:07:07 -- common/autotest_common.sh@871 -- # dd 
if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:03.445 1+0 records in 00:23:03.445 1+0 records out 00:23:03.445 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000254097 s, 16.1 MB/s 00:23:03.445 13:07:07 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:03.445 13:07:07 -- common/autotest_common.sh@872 -- # size=4096 00:23:03.445 13:07:07 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:03.445 13:07:07 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:23:03.445 13:07:07 -- common/autotest_common.sh@875 -- # return 0 00:23:03.445 13:07:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:03.445 13:07:07 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:03.445 13:07:07 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:23:03.445 13:07:07 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:23:03.445 13:07:07 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:23:08.768 63488+0 records in 00:23:08.768 63488+0 records out 00:23:08.768 32505856 bytes (33 MB, 31 MiB) copied, 5.38957 s, 6.0 MB/s 00:23:08.768 13:07:12 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:08.768 13:07:12 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:08.768 13:07:12 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:23:09.026 13:07:12 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:09.026 13:07:12 -- bdev/nbd_common.sh@51 -- # local i 00:23:09.026 13:07:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:09.026 13:07:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:09.285 13:07:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:09.285 13:07:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:09.285 13:07:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:09.285 13:07:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:09.285 13:07:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:09.285 13:07:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:09.285 [2024-04-17 13:07:13.195921] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:09.285 13:07:13 -- bdev/nbd_common.sh@41 -- # break 00:23:09.285 13:07:13 -- bdev/nbd_common.sh@45 -- # return 0 00:23:09.285 13:07:13 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:09.285 [2024-04-17 13:07:13.399562] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:09.285 13:07:13 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:09.285 13:07:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:09.285 13:07:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:09.285 13:07:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:09.285 13:07:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:09.285 13:07:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:23:09.285 13:07:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:09.285 13:07:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:09.285 13:07:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:09.285 13:07:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:09.285 13:07:13 -- 
bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.285 13:07:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:09.544 13:07:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:09.544 "name": "raid_bdev1", 00:23:09.544 "uuid": "406f3fa5-b723-4117-a1e7-cfb7971cc3a0", 00:23:09.544 "strip_size_kb": 0, 00:23:09.544 "state": "online", 00:23:09.544 "raid_level": "raid1", 00:23:09.544 "superblock": true, 00:23:09.544 "num_base_bdevs": 2, 00:23:09.544 "num_base_bdevs_discovered": 1, 00:23:09.544 "num_base_bdevs_operational": 1, 00:23:09.544 "base_bdevs_list": [ 00:23:09.544 { 00:23:09.544 "name": null, 00:23:09.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:09.544 "is_configured": false, 00:23:09.544 "data_offset": 2048, 00:23:09.544 "data_size": 63488 00:23:09.544 }, 00:23:09.544 { 00:23:09.544 "name": "BaseBdev2", 00:23:09.544 "uuid": "2ea92789-05d9-59b4-b311-580d8b4dcf02", 00:23:09.544 "is_configured": true, 00:23:09.544 "data_offset": 2048, 00:23:09.544 "data_size": 63488 00:23:09.544 } 00:23:09.544 ] 00:23:09.544 }' 00:23:09.544 13:07:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:09.544 13:07:13 -- common/autotest_common.sh@10 -- # set +x 00:23:10.479 13:07:14 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:10.480 [2024-04-17 13:07:14.611855] bdev_raid.c:3247:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:10.480 [2024-04-17 13:07:14.611931] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:10.738 [2024-04-17 13:07:14.626214] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca4e30 00:23:10.738 [2024-04-17 13:07:14.628262] bdev_raid.c:2751:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:10.738 13:07:14 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:23:11.673 13:07:15 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:11.673 13:07:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:11.673 13:07:15 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:11.673 13:07:15 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:11.673 13:07:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:11.673 13:07:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:11.673 13:07:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:11.932 13:07:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:11.932 "name": "raid_bdev1", 00:23:11.932 "uuid": "406f3fa5-b723-4117-a1e7-cfb7971cc3a0", 00:23:11.932 "strip_size_kb": 0, 00:23:11.932 "state": "online", 00:23:11.932 "raid_level": "raid1", 00:23:11.932 "superblock": true, 00:23:11.932 "num_base_bdevs": 2, 00:23:11.932 "num_base_bdevs_discovered": 2, 00:23:11.932 "num_base_bdevs_operational": 2, 00:23:11.932 "process": { 00:23:11.932 "type": "rebuild", 00:23:11.932 "target": "spare", 00:23:11.932 "progress": { 00:23:11.932 "blocks": 24576, 00:23:11.932 "percent": 38 00:23:11.932 } 00:23:11.932 }, 00:23:11.932 "base_bdevs_list": [ 00:23:11.932 { 00:23:11.932 "name": "spare", 00:23:11.932 "uuid": "782b999a-0902-53cc-84b9-197a27b77c16", 00:23:11.932 "is_configured": true, 00:23:11.932 "data_offset": 2048, 00:23:11.932 "data_size": 63488 
00:23:11.932 }, 00:23:11.932 { 00:23:11.932 "name": "BaseBdev2", 00:23:11.932 "uuid": "2ea92789-05d9-59b4-b311-580d8b4dcf02", 00:23:11.932 "is_configured": true, 00:23:11.932 "data_offset": 2048, 00:23:11.932 "data_size": 63488 00:23:11.932 } 00:23:11.932 ] 00:23:11.932 }' 00:23:11.932 13:07:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:11.932 13:07:15 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:11.932 13:07:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:11.932 13:07:16 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:11.932 13:07:16 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:12.192 [2024-04-17 13:07:16.254570] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:12.458 [2024-04-17 13:07:16.337519] bdev_raid.c:2442:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:12.459 [2024-04-17 13:07:16.337632] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:12.459 13:07:16 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:12.459 13:07:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:12.459 13:07:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:12.459 13:07:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:12.459 13:07:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:12.459 13:07:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:23:12.459 13:07:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:12.459 13:07:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:12.459 13:07:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:12.459 13:07:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:12.459 13:07:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:12.459 13:07:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:12.717 13:07:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:12.717 "name": "raid_bdev1", 00:23:12.717 "uuid": "406f3fa5-b723-4117-a1e7-cfb7971cc3a0", 00:23:12.717 "strip_size_kb": 0, 00:23:12.717 "state": "online", 00:23:12.717 "raid_level": "raid1", 00:23:12.717 "superblock": true, 00:23:12.717 "num_base_bdevs": 2, 00:23:12.717 "num_base_bdevs_discovered": 1, 00:23:12.717 "num_base_bdevs_operational": 1, 00:23:12.717 "base_bdevs_list": [ 00:23:12.717 { 00:23:12.717 "name": null, 00:23:12.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:12.717 "is_configured": false, 00:23:12.717 "data_offset": 2048, 00:23:12.717 "data_size": 63488 00:23:12.717 }, 00:23:12.717 { 00:23:12.717 "name": "BaseBdev2", 00:23:12.717 "uuid": "2ea92789-05d9-59b4-b311-580d8b4dcf02", 00:23:12.717 "is_configured": true, 00:23:12.717 "data_offset": 2048, 00:23:12.717 "data_size": 63488 00:23:12.717 } 00:23:12.717 ] 00:23:12.717 }' 00:23:12.717 13:07:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:12.717 13:07:16 -- common/autotest_common.sh@10 -- # set +x 00:23:13.326 13:07:17 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:13.326 13:07:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:13.326 13:07:17 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:13.326 13:07:17 -- bdev/bdev_raid.sh@185 -- # 
local target=none 00:23:13.326 13:07:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:13.326 13:07:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:13.326 13:07:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:13.585 13:07:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:13.585 "name": "raid_bdev1", 00:23:13.585 "uuid": "406f3fa5-b723-4117-a1e7-cfb7971cc3a0", 00:23:13.585 "strip_size_kb": 0, 00:23:13.585 "state": "online", 00:23:13.585 "raid_level": "raid1", 00:23:13.585 "superblock": true, 00:23:13.585 "num_base_bdevs": 2, 00:23:13.585 "num_base_bdevs_discovered": 1, 00:23:13.585 "num_base_bdevs_operational": 1, 00:23:13.585 "base_bdevs_list": [ 00:23:13.585 { 00:23:13.585 "name": null, 00:23:13.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:13.585 "is_configured": false, 00:23:13.585 "data_offset": 2048, 00:23:13.585 "data_size": 63488 00:23:13.585 }, 00:23:13.585 { 00:23:13.585 "name": "BaseBdev2", 00:23:13.585 "uuid": "2ea92789-05d9-59b4-b311-580d8b4dcf02", 00:23:13.585 "is_configured": true, 00:23:13.585 "data_offset": 2048, 00:23:13.585 "data_size": 63488 00:23:13.585 } 00:23:13.585 ] 00:23:13.585 }' 00:23:13.585 13:07:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:13.585 13:07:17 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:13.585 13:07:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:13.845 13:07:17 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:13.845 13:07:17 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:14.104 [2024-04-17 13:07:17.995563] bdev_raid.c:3247:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:14.104 [2024-04-17 13:07:17.995648] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:14.104 [2024-04-17 13:07:18.009155] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca4fd0 00:23:14.104 [2024-04-17 13:07:18.011126] bdev_raid.c:2751:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:14.104 13:07:18 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:23:15.039 13:07:19 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:15.039 13:07:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:15.039 13:07:19 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:15.039 13:07:19 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:15.039 13:07:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:15.039 13:07:19 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:15.039 13:07:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:15.298 13:07:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:15.298 "name": "raid_bdev1", 00:23:15.298 "uuid": "406f3fa5-b723-4117-a1e7-cfb7971cc3a0", 00:23:15.298 "strip_size_kb": 0, 00:23:15.298 "state": "online", 00:23:15.298 "raid_level": "raid1", 00:23:15.298 "superblock": true, 00:23:15.298 "num_base_bdevs": 2, 00:23:15.298 "num_base_bdevs_discovered": 2, 00:23:15.298 "num_base_bdevs_operational": 2, 00:23:15.298 "process": { 00:23:15.298 "type": "rebuild", 00:23:15.298 "target": "spare", 00:23:15.298 "progress": { 00:23:15.298 "blocks": 24576, 
00:23:15.298 "percent": 38 00:23:15.298 } 00:23:15.298 }, 00:23:15.298 "base_bdevs_list": [ 00:23:15.298 { 00:23:15.298 "name": "spare", 00:23:15.298 "uuid": "782b999a-0902-53cc-84b9-197a27b77c16", 00:23:15.298 "is_configured": true, 00:23:15.298 "data_offset": 2048, 00:23:15.298 "data_size": 63488 00:23:15.298 }, 00:23:15.298 { 00:23:15.298 "name": "BaseBdev2", 00:23:15.298 "uuid": "2ea92789-05d9-59b4-b311-580d8b4dcf02", 00:23:15.298 "is_configured": true, 00:23:15.298 "data_offset": 2048, 00:23:15.298 "data_size": 63488 00:23:15.298 } 00:23:15.298 ] 00:23:15.298 }' 00:23:15.298 13:07:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:15.298 13:07:19 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:15.298 13:07:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:15.298 13:07:19 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:15.298 13:07:19 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:23:15.298 13:07:19 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:23:15.298 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:23:15.298 13:07:19 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:23:15.298 13:07:19 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:23:15.298 13:07:19 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:23:15.298 13:07:19 -- bdev/bdev_raid.sh@657 -- # local timeout=460 00:23:15.298 13:07:19 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:15.298 13:07:19 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:15.298 13:07:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:15.298 13:07:19 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:15.298 13:07:19 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:15.298 13:07:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:15.298 13:07:19 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:15.298 13:07:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:15.557 13:07:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:15.557 "name": "raid_bdev1", 00:23:15.557 "uuid": "406f3fa5-b723-4117-a1e7-cfb7971cc3a0", 00:23:15.557 "strip_size_kb": 0, 00:23:15.557 "state": "online", 00:23:15.557 "raid_level": "raid1", 00:23:15.557 "superblock": true, 00:23:15.557 "num_base_bdevs": 2, 00:23:15.557 "num_base_bdevs_discovered": 2, 00:23:15.557 "num_base_bdevs_operational": 2, 00:23:15.557 "process": { 00:23:15.557 "type": "rebuild", 00:23:15.557 "target": "spare", 00:23:15.557 "progress": { 00:23:15.557 "blocks": 32768, 00:23:15.557 "percent": 51 00:23:15.557 } 00:23:15.557 }, 00:23:15.557 "base_bdevs_list": [ 00:23:15.557 { 00:23:15.557 "name": "spare", 00:23:15.557 "uuid": "782b999a-0902-53cc-84b9-197a27b77c16", 00:23:15.557 "is_configured": true, 00:23:15.557 "data_offset": 2048, 00:23:15.557 "data_size": 63488 00:23:15.557 }, 00:23:15.557 { 00:23:15.557 "name": "BaseBdev2", 00:23:15.557 "uuid": "2ea92789-05d9-59b4-b311-580d8b4dcf02", 00:23:15.557 "is_configured": true, 00:23:15.557 "data_offset": 2048, 00:23:15.557 "data_size": 63488 00:23:15.557 } 00:23:15.557 ] 00:23:15.557 }' 00:23:15.557 13:07:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:15.816 13:07:19 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:15.816 13:07:19 -- bdev/bdev_raid.sh@191 -- # jq -r 
'.process.target // "none"' 00:23:15.816 13:07:19 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:15.816 13:07:19 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:16.752 13:07:20 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:16.752 13:07:20 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:16.752 13:07:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:16.752 13:07:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:16.752 13:07:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:16.752 13:07:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:16.752 13:07:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:16.752 13:07:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:17.011 13:07:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:17.011 "name": "raid_bdev1", 00:23:17.011 "uuid": "406f3fa5-b723-4117-a1e7-cfb7971cc3a0", 00:23:17.011 "strip_size_kb": 0, 00:23:17.011 "state": "online", 00:23:17.011 "raid_level": "raid1", 00:23:17.011 "superblock": true, 00:23:17.011 "num_base_bdevs": 2, 00:23:17.011 "num_base_bdevs_discovered": 2, 00:23:17.011 "num_base_bdevs_operational": 2, 00:23:17.011 "process": { 00:23:17.011 "type": "rebuild", 00:23:17.011 "target": "spare", 00:23:17.011 "progress": { 00:23:17.011 "blocks": 59392, 00:23:17.011 "percent": 93 00:23:17.011 } 00:23:17.011 }, 00:23:17.011 "base_bdevs_list": [ 00:23:17.011 { 00:23:17.011 "name": "spare", 00:23:17.011 "uuid": "782b999a-0902-53cc-84b9-197a27b77c16", 00:23:17.011 "is_configured": true, 00:23:17.011 "data_offset": 2048, 00:23:17.011 "data_size": 63488 00:23:17.011 }, 00:23:17.011 { 00:23:17.011 "name": "BaseBdev2", 00:23:17.011 "uuid": "2ea92789-05d9-59b4-b311-580d8b4dcf02", 00:23:17.011 "is_configured": true, 00:23:17.011 "data_offset": 2048, 00:23:17.011 "data_size": 63488 00:23:17.011 } 00:23:17.011 ] 00:23:17.011 }' 00:23:17.011 13:07:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:17.011 13:07:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:17.011 13:07:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:17.011 [2024-04-17 13:07:21.128108] bdev_raid.c:2716:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:17.011 [2024-04-17 13:07:21.128209] bdev_raid.c:2433:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:17.011 [2024-04-17 13:07:21.128353] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:17.011 13:07:21 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:17.011 13:07:21 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:18.386 13:07:22 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:18.386 13:07:22 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:18.386 13:07:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:18.386 13:07:22 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:18.386 13:07:22 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:18.386 13:07:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:18.386 13:07:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:18.386 13:07:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:18.386 
13:07:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:18.386 "name": "raid_bdev1", 00:23:18.386 "uuid": "406f3fa5-b723-4117-a1e7-cfb7971cc3a0", 00:23:18.386 "strip_size_kb": 0, 00:23:18.386 "state": "online", 00:23:18.386 "raid_level": "raid1", 00:23:18.386 "superblock": true, 00:23:18.386 "num_base_bdevs": 2, 00:23:18.386 "num_base_bdevs_discovered": 2, 00:23:18.386 "num_base_bdevs_operational": 2, 00:23:18.386 "base_bdevs_list": [ 00:23:18.386 { 00:23:18.386 "name": "spare", 00:23:18.386 "uuid": "782b999a-0902-53cc-84b9-197a27b77c16", 00:23:18.386 "is_configured": true, 00:23:18.386 "data_offset": 2048, 00:23:18.386 "data_size": 63488 00:23:18.386 }, 00:23:18.386 { 00:23:18.386 "name": "BaseBdev2", 00:23:18.386 "uuid": "2ea92789-05d9-59b4-b311-580d8b4dcf02", 00:23:18.386 "is_configured": true, 00:23:18.387 "data_offset": 2048, 00:23:18.387 "data_size": 63488 00:23:18.387 } 00:23:18.387 ] 00:23:18.387 }' 00:23:18.387 13:07:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:18.387 13:07:22 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:18.387 13:07:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:18.387 13:07:22 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:23:18.387 13:07:22 -- bdev/bdev_raid.sh@660 -- # break 00:23:18.387 13:07:22 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:18.387 13:07:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:18.387 13:07:22 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:18.387 13:07:22 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:18.387 13:07:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:18.387 13:07:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:18.387 13:07:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:18.646 13:07:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:18.646 "name": "raid_bdev1", 00:23:18.646 "uuid": "406f3fa5-b723-4117-a1e7-cfb7971cc3a0", 00:23:18.646 "strip_size_kb": 0, 00:23:18.646 "state": "online", 00:23:18.646 "raid_level": "raid1", 00:23:18.646 "superblock": true, 00:23:18.646 "num_base_bdevs": 2, 00:23:18.646 "num_base_bdevs_discovered": 2, 00:23:18.646 "num_base_bdevs_operational": 2, 00:23:18.646 "base_bdevs_list": [ 00:23:18.646 { 00:23:18.646 "name": "spare", 00:23:18.646 "uuid": "782b999a-0902-53cc-84b9-197a27b77c16", 00:23:18.646 "is_configured": true, 00:23:18.646 "data_offset": 2048, 00:23:18.646 "data_size": 63488 00:23:18.646 }, 00:23:18.646 { 00:23:18.646 "name": "BaseBdev2", 00:23:18.646 "uuid": "2ea92789-05d9-59b4-b311-580d8b4dcf02", 00:23:18.646 "is_configured": true, 00:23:18.646 "data_offset": 2048, 00:23:18.646 "data_size": 63488 00:23:18.646 } 00:23:18.646 ] 00:23:18.646 }' 00:23:18.646 13:07:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:18.646 13:07:22 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:18.646 13:07:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:18.904 13:07:22 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:18.904 13:07:22 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:18.904 13:07:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:18.904 13:07:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:18.904 13:07:22 -- bdev/bdev_raid.sh@119 -- # local 
raid_level=raid1 00:23:18.904 13:07:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:18.904 13:07:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:18.904 13:07:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:18.904 13:07:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:18.904 13:07:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:18.904 13:07:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:18.904 13:07:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:18.904 13:07:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:19.163 13:07:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:19.163 "name": "raid_bdev1", 00:23:19.163 "uuid": "406f3fa5-b723-4117-a1e7-cfb7971cc3a0", 00:23:19.163 "strip_size_kb": 0, 00:23:19.163 "state": "online", 00:23:19.163 "raid_level": "raid1", 00:23:19.163 "superblock": true, 00:23:19.163 "num_base_bdevs": 2, 00:23:19.163 "num_base_bdevs_discovered": 2, 00:23:19.163 "num_base_bdevs_operational": 2, 00:23:19.163 "base_bdevs_list": [ 00:23:19.163 { 00:23:19.163 "name": "spare", 00:23:19.163 "uuid": "782b999a-0902-53cc-84b9-197a27b77c16", 00:23:19.163 "is_configured": true, 00:23:19.163 "data_offset": 2048, 00:23:19.163 "data_size": 63488 00:23:19.163 }, 00:23:19.163 { 00:23:19.163 "name": "BaseBdev2", 00:23:19.163 "uuid": "2ea92789-05d9-59b4-b311-580d8b4dcf02", 00:23:19.163 "is_configured": true, 00:23:19.163 "data_offset": 2048, 00:23:19.163 "data_size": 63488 00:23:19.163 } 00:23:19.163 ] 00:23:19.163 }' 00:23:19.163 13:07:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:19.163 13:07:23 -- common/autotest_common.sh@10 -- # set +x 00:23:19.731 13:07:23 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:19.989 [2024-04-17 13:07:24.079998] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:19.989 [2024-04-17 13:07:24.080034] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:19.989 [2024-04-17 13:07:24.080144] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:19.989 [2024-04-17 13:07:24.080250] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:19.990 [2024-04-17 13:07:24.080279] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:23:19.990 13:07:24 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.990 13:07:24 -- bdev/bdev_raid.sh@671 -- # jq length 00:23:20.248 13:07:24 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:23:20.248 13:07:24 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:23:20.248 13:07:24 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:20.248 13:07:24 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:20.248 13:07:24 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:23:20.248 13:07:24 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:20.248 13:07:24 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:23:20.248 13:07:24 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:20.248 13:07:24 -- bdev/nbd_common.sh@12 -- # local i 00:23:20.248 13:07:24 -- bdev/nbd_common.sh@14 
-- # (( i = 0 )) 00:23:20.248 13:07:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:20.248 13:07:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:20.507 /dev/nbd0 00:23:20.507 13:07:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:20.765 13:07:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:20.765 13:07:24 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:23:20.765 13:07:24 -- common/autotest_common.sh@855 -- # local i 00:23:20.765 13:07:24 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:23:20.765 13:07:24 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:23:20.765 13:07:24 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:23:20.765 13:07:24 -- common/autotest_common.sh@859 -- # break 00:23:20.765 13:07:24 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:23:20.765 13:07:24 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:23:20.765 13:07:24 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:20.765 1+0 records in 00:23:20.765 1+0 records out 00:23:20.765 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387131 s, 10.6 MB/s 00:23:20.765 13:07:24 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:20.765 13:07:24 -- common/autotest_common.sh@872 -- # size=4096 00:23:20.765 13:07:24 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:20.765 13:07:24 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:23:20.765 13:07:24 -- common/autotest_common.sh@875 -- # return 0 00:23:20.765 13:07:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:20.765 13:07:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:20.765 13:07:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:23:21.024 /dev/nbd1 00:23:21.024 13:07:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:21.024 13:07:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:21.024 13:07:24 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:23:21.024 13:07:24 -- common/autotest_common.sh@855 -- # local i 00:23:21.024 13:07:24 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:23:21.024 13:07:24 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:23:21.024 13:07:24 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:23:21.024 13:07:24 -- common/autotest_common.sh@859 -- # break 00:23:21.024 13:07:24 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:23:21.025 13:07:24 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:23:21.025 13:07:24 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:21.025 1+0 records in 00:23:21.025 1+0 records out 00:23:21.025 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260031 s, 15.8 MB/s 00:23:21.025 13:07:24 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:21.025 13:07:24 -- common/autotest_common.sh@872 -- # size=4096 00:23:21.025 13:07:24 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:21.025 13:07:24 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:23:21.025 13:07:24 -- common/autotest_common.sh@875 -- # return 0 00:23:21.025 13:07:24 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:21.025 13:07:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:21.025 13:07:24 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:21.025 13:07:25 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:23:21.025 13:07:25 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:21.025 13:07:25 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:23:21.025 13:07:25 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:21.025 13:07:25 -- bdev/nbd_common.sh@51 -- # local i 00:23:21.025 13:07:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:21.025 13:07:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:21.284 13:07:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:21.284 13:07:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:21.284 13:07:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:21.284 13:07:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:21.284 13:07:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:21.284 13:07:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:21.284 13:07:25 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:23:21.543 13:07:25 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:23:21.543 13:07:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:21.543 13:07:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:21.543 13:07:25 -- bdev/nbd_common.sh@41 -- # break 00:23:21.543 13:07:25 -- bdev/nbd_common.sh@45 -- # return 0 00:23:21.543 13:07:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:21.543 13:07:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:21.802 13:07:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:21.802 13:07:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:21.802 13:07:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:21.802 13:07:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:21.802 13:07:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:21.802 13:07:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:21.802 13:07:25 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:23:21.802 13:07:25 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:23:21.802 13:07:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:21.802 13:07:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:21.802 13:07:25 -- bdev/nbd_common.sh@41 -- # break 00:23:21.802 13:07:25 -- bdev/nbd_common.sh@45 -- # return 0 00:23:21.802 13:07:25 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:23:21.802 13:07:25 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:21.802 13:07:25 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:23:21.802 13:07:25 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:23:22.061 13:07:26 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:22.319 [2024-04-17 13:07:26.300667] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:22.319 [2024-04-17 13:07:26.300782] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:22.319 [2024-04-17 13:07:26.300816] vbdev_passthru.c: 
676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:23:22.319 [2024-04-17 13:07:26.300842] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:22.319 [2024-04-17 13:07:26.303019] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:22.319 [2024-04-17 13:07:26.303086] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:22.319 [2024-04-17 13:07:26.303219] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:22.320 [2024-04-17 13:07:26.303333] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:22.320 BaseBdev1 00:23:22.320 13:07:26 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:22.320 13:07:26 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:23:22.320 13:07:26 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:23:22.578 13:07:26 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:22.578 [2024-04-17 13:07:26.700798] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:22.578 [2024-04-17 13:07:26.700909] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:22.578 [2024-04-17 13:07:26.700942] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:23:22.578 [2024-04-17 13:07:26.700971] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:22.578 [2024-04-17 13:07:26.701446] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:22.578 [2024-04-17 13:07:26.701507] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:22.579 [2024-04-17 13:07:26.701641] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:23:22.579 [2024-04-17 13:07:26.701657] bdev_raid.c:3395:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:23:22.579 [2024-04-17 13:07:26.701664] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:22.579 [2024-04-17 13:07:26.701691] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state configuring 00:23:22.579 [2024-04-17 13:07:26.701761] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:22.579 BaseBdev2 00:23:22.579 13:07:26 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:22.838 13:07:26 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:23.097 [2024-04-17 13:07:27.108925] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:23.097 [2024-04-17 13:07:27.109077] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:23.097 [2024-04-17 13:07:27.109116] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:23:23.097 [2024-04-17 13:07:27.109139] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:23.097 [2024-04-17 13:07:27.109663] 
vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:23.097 [2024-04-17 13:07:27.109721] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:23.097 [2024-04-17 13:07:27.109869] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:23:23.097 [2024-04-17 13:07:27.109916] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:23.097 spare 00:23:23.097 13:07:27 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:23.097 13:07:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:23.097 13:07:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:23.097 13:07:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:23.097 13:07:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:23.097 13:07:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:23.097 13:07:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:23.097 13:07:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:23.097 13:07:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:23.097 13:07:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:23.097 13:07:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.097 13:07:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:23.097 [2024-04-17 13:07:27.210046] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:23:23.097 [2024-04-17 13:07:27.210098] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:23.097 [2024-04-17 13:07:27.210310] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc5b10 00:23:23.097 [2024-04-17 13:07:27.210741] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:23:23.097 [2024-04-17 13:07:27.210765] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:23:23.097 [2024-04-17 13:07:27.210939] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:23.396 13:07:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:23.396 "name": "raid_bdev1", 00:23:23.396 "uuid": "406f3fa5-b723-4117-a1e7-cfb7971cc3a0", 00:23:23.396 "strip_size_kb": 0, 00:23:23.396 "state": "online", 00:23:23.396 "raid_level": "raid1", 00:23:23.396 "superblock": true, 00:23:23.396 "num_base_bdevs": 2, 00:23:23.396 "num_base_bdevs_discovered": 2, 00:23:23.396 "num_base_bdevs_operational": 2, 00:23:23.396 "base_bdevs_list": [ 00:23:23.396 { 00:23:23.396 "name": "spare", 00:23:23.396 "uuid": "782b999a-0902-53cc-84b9-197a27b77c16", 00:23:23.396 "is_configured": true, 00:23:23.396 "data_offset": 2048, 00:23:23.396 "data_size": 63488 00:23:23.396 }, 00:23:23.396 { 00:23:23.396 "name": "BaseBdev2", 00:23:23.396 "uuid": "2ea92789-05d9-59b4-b311-580d8b4dcf02", 00:23:23.396 "is_configured": true, 00:23:23.396 "data_offset": 2048, 00:23:23.396 "data_size": 63488 00:23:23.396 } 00:23:23.396 ] 00:23:23.396 }' 00:23:23.396 13:07:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:23.396 13:07:27 -- common/autotest_common.sh@10 -- # set +x 00:23:23.963 13:07:27 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:23.963 13:07:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 
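The verify_raid_bdev_process / verify_raid_bdev_state checks traced throughout this run reduce to one pattern: dump every raid bdev over the UNIX-socket RPC, pick out the bdev under test with jq, and compare single fields. Below is a minimal sketch of that pattern, reusing the socket path, RPC method, and jq filters visible in the trace; it is a simplified stand-in, not the actual helpers from test/bdev/bdev_raid.sh.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Dump all raid bdevs and keep only the one under test.
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    # After a finished rebuild the .process object disappears, so jq's
    # // operator maps the missing fields to "none", matching the
    # [[ none == \n\o\n\e ]] comparisons seen in the trace.
    [[ $(jq -r '.process.type // "none"' <<< "$info") == none ]] || exit 1
    [[ $(jq -r '.process.target // "none"' <<< "$info") == none ]] || exit 1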
00:23:23.963 13:07:27 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:23.963 13:07:27 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:23.963 13:07:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:23.963 13:07:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.963 13:07:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:24.222 13:07:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:24.222 "name": "raid_bdev1", 00:23:24.222 "uuid": "406f3fa5-b723-4117-a1e7-cfb7971cc3a0", 00:23:24.222 "strip_size_kb": 0, 00:23:24.222 "state": "online", 00:23:24.222 "raid_level": "raid1", 00:23:24.222 "superblock": true, 00:23:24.222 "num_base_bdevs": 2, 00:23:24.222 "num_base_bdevs_discovered": 2, 00:23:24.222 "num_base_bdevs_operational": 2, 00:23:24.222 "base_bdevs_list": [ 00:23:24.222 { 00:23:24.222 "name": "spare", 00:23:24.222 "uuid": "782b999a-0902-53cc-84b9-197a27b77c16", 00:23:24.222 "is_configured": true, 00:23:24.222 "data_offset": 2048, 00:23:24.222 "data_size": 63488 00:23:24.222 }, 00:23:24.222 { 00:23:24.222 "name": "BaseBdev2", 00:23:24.222 "uuid": "2ea92789-05d9-59b4-b311-580d8b4dcf02", 00:23:24.222 "is_configured": true, 00:23:24.222 "data_offset": 2048, 00:23:24.222 "data_size": 63488 00:23:24.222 } 00:23:24.222 ] 00:23:24.222 }' 00:23:24.222 13:07:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:24.222 13:07:28 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:24.222 13:07:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:24.222 13:07:28 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:24.222 13:07:28 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:24.222 13:07:28 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:24.481 13:07:28 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:23:24.481 13:07:28 -- bdev/bdev_raid.sh@709 -- # killprocess 131166 00:23:24.481 13:07:28 -- common/autotest_common.sh@924 -- # '[' -z 131166 ']' 00:23:24.481 13:07:28 -- common/autotest_common.sh@928 -- # kill -0 131166 00:23:24.481 13:07:28 -- common/autotest_common.sh@929 -- # uname 00:23:24.481 13:07:28 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:23:24.481 13:07:28 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 131166 00:23:24.481 13:07:28 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:23:24.481 killing process with pid 131166 00:23:24.481 Received shutdown signal, test time was about 60.000000 seconds 00:23:24.481 00:23:24.481 Latency(us) 00:23:24.482 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:24.482 =================================================================================================================== 00:23:24.482 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:24.482 13:07:28 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:23:24.482 13:07:28 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 131166' 00:23:24.482 13:07:28 -- common/autotest_common.sh@943 -- # kill 131166 00:23:24.482 [2024-04-17 13:07:28.568473] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:24.482 [2024-04-17 13:07:28.568571] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:24.482 [2024-04-17 13:07:28.568635] 
bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:24.482 [2024-04-17 13:07:28.568656] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:23:24.482 13:07:28 -- common/autotest_common.sh@948 -- # wait 131166 00:23:24.740 [2024-04-17 13:07:28.776749] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:26.121 13:07:29 -- bdev/bdev_raid.sh@711 -- # return 0 00:23:26.121 00:23:26.121 real 0m27.025s 00:23:26.121 user 0m39.431s 00:23:26.121 sys 0m4.300s 00:23:26.121 13:07:29 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:23:26.121 13:07:29 -- common/autotest_common.sh@10 -- # set +x 00:23:26.121 ************************************ 00:23:26.121 END TEST raid_rebuild_test_sb 00:23:26.121 ************************************ 00:23:26.121 13:07:29 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true 00:23:26.121 13:07:29 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:23:26.121 13:07:29 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:23:26.121 13:07:29 -- common/autotest_common.sh@10 -- # set +x 00:23:26.121 ************************************ 00:23:26.121 START TEST raid_rebuild_test_io 00:23:26.121 ************************************ 00:23:26.121 13:07:29 -- common/autotest_common.sh@1099 -- # raid_rebuild_test raid1 2 false true 00:23:26.121 13:07:29 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:23:26.121 13:07:29 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:23:26.121 13:07:29 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:23:26.121 13:07:29 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:23:26.121 13:07:29 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:23:26.121 13:07:29 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:23:26.121 13:07:29 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:26.121 13:07:29 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:23:26.121 13:07:29 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:26.121 13:07:29 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:26.121 13:07:29 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:23:26.121 13:07:29 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:26.121 13:07:29 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:26.121 13:07:29 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:23:26.121 13:07:29 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:23:26.121 13:07:29 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:23:26.121 13:07:29 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:23:26.121 13:07:29 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:23:26.121 13:07:29 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:23:26.121 13:07:29 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:23:26.121 13:07:29 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:23:26.121 13:07:29 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:23:26.121 13:07:29 -- bdev/bdev_raid.sh@544 -- # raid_pid=131844 00:23:26.121 13:07:29 -- bdev/bdev_raid.sh@545 -- # waitforlisten 131844 /var/tmp/spdk-raid.sock 00:23:26.121 13:07:29 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:26.121 13:07:29 -- common/autotest_common.sh@817 -- # '[' -z 131844 ']' 
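The raid_rebuild_test_io run that starts here follows the launch pattern common to these tests: start bdevperf on an RPC socket with -z so it idles until driven over RPC, block until the socket answers, then configure bdevs and trigger I/O via bdevperf.py perform_tests (visible later in the trace). A simplified sketch of the launch-and-wait step follows, with the binary path and flags copied from the command line above; the polling loop is a stand-in for waitforlisten, and rpc_get_methods is used here only as a cheap liveness probe.

    sock=/var/tmp/spdk-raid.sock
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r "$sock" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    # Block until the RPC server accepts requests on the socket.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null; do
        kill -0 "$raid_pid" 2>/dev/null || exit 1   # give up if bdevperf exited
        sleep 0.1
    done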
00:23:26.121 13:07:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:26.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:26.121 13:07:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:26.121 13:07:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:26.121 13:07:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:26.121 13:07:29 -- common/autotest_common.sh@10 -- # set +x 00:23:26.121 [2024-04-17 13:07:29.976552] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:23:26.121 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:26.121 Zero copy mechanism will not be used. 00:23:26.121 [2024-04-17 13:07:29.976765] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131844 ] 00:23:26.121 [2024-04-17 13:07:30.130886] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.380 [2024-04-17 13:07:30.312490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.380 [2024-04-17 13:07:30.501835] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:26.948 13:07:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:26.948 13:07:30 -- common/autotest_common.sh@850 -- # return 0 00:23:26.948 13:07:30 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:26.948 13:07:30 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:26.948 13:07:30 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:27.206 BaseBdev1 00:23:27.206 13:07:31 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:27.206 13:07:31 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:27.206 13:07:31 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:27.465 BaseBdev2 00:23:27.465 13:07:31 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:27.725 spare_malloc 00:23:27.725 13:07:31 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:27.984 spare_delay 00:23:27.984 13:07:31 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:28.243 [2024-04-17 13:07:32.209071] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:28.243 [2024-04-17 13:07:32.209235] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:28.243 [2024-04-17 13:07:32.209310] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:28.243 [2024-04-17 13:07:32.209388] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:28.243 [2024-04-17 13:07:32.212139] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:28.243 [2024-04-17 13:07:32.212216] vbdev_passthru.c: 705:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: spare 00:23:28.243 spare 00:23:28.243 13:07:32 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:23:28.502 [2024-04-17 13:07:32.421220] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:28.502 [2024-04-17 13:07:32.423268] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:28.502 [2024-04-17 13:07:32.423381] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:23:28.502 [2024-04-17 13:07:32.423435] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:23:28.502 [2024-04-17 13:07:32.423663] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:23:28.502 [2024-04-17 13:07:32.424139] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:23:28.502 [2024-04-17 13:07:32.424182] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:23:28.502 [2024-04-17 13:07:32.424458] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:28.502 13:07:32 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:28.502 13:07:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:28.502 13:07:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:28.502 13:07:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:28.502 13:07:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:28.502 13:07:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:28.502 13:07:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:28.502 13:07:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:28.502 13:07:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:28.503 13:07:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:28.503 13:07:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:28.503 13:07:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:28.762 13:07:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:28.762 "name": "raid_bdev1", 00:23:28.762 "uuid": "a619199b-1095-4ec8-a3a0-cbe363436ddd", 00:23:28.762 "strip_size_kb": 0, 00:23:28.762 "state": "online", 00:23:28.762 "raid_level": "raid1", 00:23:28.762 "superblock": false, 00:23:28.762 "num_base_bdevs": 2, 00:23:28.762 "num_base_bdevs_discovered": 2, 00:23:28.762 "num_base_bdevs_operational": 2, 00:23:28.762 "base_bdevs_list": [ 00:23:28.762 { 00:23:28.762 "name": "BaseBdev1", 00:23:28.762 "uuid": "8c9a7708-198a-4412-84f3-5e5865d3b820", 00:23:28.762 "is_configured": true, 00:23:28.762 "data_offset": 0, 00:23:28.762 "data_size": 65536 00:23:28.762 }, 00:23:28.762 { 00:23:28.762 "name": "BaseBdev2", 00:23:28.762 "uuid": "41bf4dd2-2db2-4ba1-ae5c-79d9c23f34fd", 00:23:28.762 "is_configured": true, 00:23:28.762 "data_offset": 0, 00:23:28.762 "data_size": 65536 00:23:28.762 } 00:23:28.762 ] 00:23:28.762 }' 00:23:28.762 13:07:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:28.762 13:07:32 -- common/autotest_common.sh@10 -- # set +x 00:23:29.702 13:07:33 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:29.702 13:07:33 -- 
bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:23:29.702 [2024-04-17 13:07:33.749781] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:29.702 13:07:33 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:23:29.702 13:07:33 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:29.702 13:07:33 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:29.961 13:07:34 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:23:29.961 13:07:34 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:23:29.961 13:07:34 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:29.961 13:07:34 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:23:30.220 [2024-04-17 13:07:34.121199] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:23:30.220 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:30.220 Zero copy mechanism will not be used. 00:23:30.220 Running I/O for 60 seconds... 00:23:30.220 [2024-04-17 13:07:34.273327] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:30.220 [2024-04-17 13:07:34.273548] bdev_raid.c:1969:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005930 00:23:30.221 13:07:34 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:30.221 13:07:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:30.221 13:07:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:30.221 13:07:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:30.221 13:07:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:30.221 13:07:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:23:30.221 13:07:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:30.221 13:07:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:30.221 13:07:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:30.221 13:07:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:30.221 13:07:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:30.221 13:07:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:30.479 13:07:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:30.479 "name": "raid_bdev1", 00:23:30.479 "uuid": "a619199b-1095-4ec8-a3a0-cbe363436ddd", 00:23:30.479 "strip_size_kb": 0, 00:23:30.479 "state": "online", 00:23:30.479 "raid_level": "raid1", 00:23:30.479 "superblock": false, 00:23:30.479 "num_base_bdevs": 2, 00:23:30.479 "num_base_bdevs_discovered": 1, 00:23:30.479 "num_base_bdevs_operational": 1, 00:23:30.479 "base_bdevs_list": [ 00:23:30.479 { 00:23:30.479 "name": null, 00:23:30.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:30.479 "is_configured": false, 00:23:30.479 "data_offset": 0, 00:23:30.479 "data_size": 65536 00:23:30.479 }, 00:23:30.479 { 00:23:30.479 "name": "BaseBdev2", 00:23:30.479 "uuid": "41bf4dd2-2db2-4ba1-ae5c-79d9c23f34fd", 00:23:30.479 "is_configured": true, 00:23:30.479 "data_offset": 0, 00:23:30.479 "data_size": 65536 00:23:30.479 } 00:23:30.479 ] 00:23:30.479 }' 00:23:30.479 13:07:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:30.479 13:07:34 -- 
common/autotest_common.sh@10 -- # set +x 00:23:31.415 13:07:35 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:31.415 [2024-04-17 13:07:35.406665] bdev_raid.c:3247:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:31.415 [2024-04-17 13:07:35.406744] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:31.415 13:07:35 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:23:31.415 [2024-04-17 13:07:35.460488] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:23:31.415 [2024-04-17 13:07:35.462451] bdev_raid.c:2751:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:31.675 [2024-04-17 13:07:35.606386] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:31.933 [2024-04-17 13:07:36.022522] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:32.191 [2024-04-17 13:07:36.232728] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:32.192 [2024-04-17 13:07:36.233105] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:32.450 13:07:36 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:32.450 13:07:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:32.450 13:07:36 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:32.450 13:07:36 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:32.450 13:07:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:32.450 13:07:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:32.450 13:07:36 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:32.450 [2024-04-17 13:07:36.458273] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:32.450 [2024-04-17 13:07:36.458694] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:32.710 [2024-04-17 13:07:36.675876] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:23:32.710 [2024-04-17 13:07:36.676217] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:23:32.710 13:07:36 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:32.710 "name": "raid_bdev1", 00:23:32.710 "uuid": "a619199b-1095-4ec8-a3a0-cbe363436ddd", 00:23:32.710 "strip_size_kb": 0, 00:23:32.710 "state": "online", 00:23:32.710 "raid_level": "raid1", 00:23:32.710 "superblock": false, 00:23:32.710 "num_base_bdevs": 2, 00:23:32.710 "num_base_bdevs_discovered": 2, 00:23:32.710 "num_base_bdevs_operational": 2, 00:23:32.710 "process": { 00:23:32.710 "type": "rebuild", 00:23:32.710 "target": "spare", 00:23:32.710 "progress": { 00:23:32.710 "blocks": 16384, 00:23:32.710 "percent": 25 00:23:32.710 } 00:23:32.710 }, 00:23:32.710 "base_bdevs_list": [ 00:23:32.710 { 00:23:32.710 "name": "spare", 00:23:32.710 "uuid": "5fbc1d29-e800-5c8f-972e-6180ef9a3b7c", 00:23:32.710 "is_configured": true, 00:23:32.710 
"data_offset": 0, 00:23:32.710 "data_size": 65536 00:23:32.710 }, 00:23:32.710 { 00:23:32.710 "name": "BaseBdev2", 00:23:32.710 "uuid": "41bf4dd2-2db2-4ba1-ae5c-79d9c23f34fd", 00:23:32.710 "is_configured": true, 00:23:32.710 "data_offset": 0, 00:23:32.710 "data_size": 65536 00:23:32.710 } 00:23:32.710 ] 00:23:32.710 }' 00:23:32.710 13:07:36 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:32.710 13:07:36 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:32.710 13:07:36 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:32.710 13:07:36 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:32.710 13:07:36 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:32.968 [2024-04-17 13:07:37.070709] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:33.228 [2024-04-17 13:07:37.130589] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:23:33.228 [2024-04-17 13:07:37.239482] bdev_raid.c:2442:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:33.228 [2024-04-17 13:07:37.241707] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:33.228 [2024-04-17 13:07:37.274875] bdev_raid.c:1969:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005930 00:23:33.228 13:07:37 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:33.228 13:07:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:33.228 13:07:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:33.228 13:07:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:33.228 13:07:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:33.228 13:07:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:23:33.228 13:07:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:33.228 13:07:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:33.228 13:07:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:33.228 13:07:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:33.228 13:07:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:33.228 13:07:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:33.488 13:07:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:33.488 "name": "raid_bdev1", 00:23:33.488 "uuid": "a619199b-1095-4ec8-a3a0-cbe363436ddd", 00:23:33.488 "strip_size_kb": 0, 00:23:33.488 "state": "online", 00:23:33.488 "raid_level": "raid1", 00:23:33.488 "superblock": false, 00:23:33.488 "num_base_bdevs": 2, 00:23:33.488 "num_base_bdevs_discovered": 1, 00:23:33.488 "num_base_bdevs_operational": 1, 00:23:33.488 "base_bdevs_list": [ 00:23:33.488 { 00:23:33.488 "name": null, 00:23:33.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:33.488 "is_configured": false, 00:23:33.488 "data_offset": 0, 00:23:33.488 "data_size": 65536 00:23:33.488 }, 00:23:33.488 { 00:23:33.488 "name": "BaseBdev2", 00:23:33.488 "uuid": "41bf4dd2-2db2-4ba1-ae5c-79d9c23f34fd", 00:23:33.488 "is_configured": true, 00:23:33.488 "data_offset": 0, 00:23:33.488 "data_size": 65536 00:23:33.488 } 00:23:33.488 ] 00:23:33.488 }' 00:23:33.488 13:07:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:33.488 13:07:37 -- 
common/autotest_common.sh@10 -- # set +x 00:23:34.462 13:07:38 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:34.462 13:07:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:34.462 13:07:38 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:34.462 13:07:38 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:34.462 13:07:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:34.462 13:07:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:34.462 13:07:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:34.721 13:07:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:34.721 "name": "raid_bdev1", 00:23:34.721 "uuid": "a619199b-1095-4ec8-a3a0-cbe363436ddd", 00:23:34.721 "strip_size_kb": 0, 00:23:34.721 "state": "online", 00:23:34.721 "raid_level": "raid1", 00:23:34.721 "superblock": false, 00:23:34.721 "num_base_bdevs": 2, 00:23:34.721 "num_base_bdevs_discovered": 1, 00:23:34.721 "num_base_bdevs_operational": 1, 00:23:34.721 "base_bdevs_list": [ 00:23:34.721 { 00:23:34.721 "name": null, 00:23:34.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:34.721 "is_configured": false, 00:23:34.721 "data_offset": 0, 00:23:34.721 "data_size": 65536 00:23:34.721 }, 00:23:34.721 { 00:23:34.721 "name": "BaseBdev2", 00:23:34.721 "uuid": "41bf4dd2-2db2-4ba1-ae5c-79d9c23f34fd", 00:23:34.721 "is_configured": true, 00:23:34.721 "data_offset": 0, 00:23:34.721 "data_size": 65536 00:23:34.721 } 00:23:34.721 ] 00:23:34.721 }' 00:23:34.721 13:07:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:34.721 13:07:38 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:34.721 13:07:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:34.721 13:07:38 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:34.721 13:07:38 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:35.006 [2024-04-17 13:07:38.933980] bdev_raid.c:3247:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:35.006 [2024-04-17 13:07:38.934052] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:35.006 13:07:38 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:23:35.006 [2024-04-17 13:07:38.998418] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:23:35.006 [2024-04-17 13:07:39.000570] bdev_raid.c:2751:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:35.006 [2024-04-17 13:07:39.110179] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:35.006 [2024-04-17 13:07:39.110654] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:35.275 [2024-04-17 13:07:39.246525] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:35.275 [2024-04-17 13:07:39.246744] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:35.534 [2024-04-17 13:07:39.593032] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:35.821 [2024-04-17 13:07:39.701952] bdev_raid.c: 853:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:35.821 [2024-04-17 13:07:39.702203] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:35.821 [2024-04-17 13:07:39.947615] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:35.821 [2024-04-17 13:07:39.948074] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:36.146 13:07:39 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:36.146 13:07:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:36.146 13:07:39 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:36.146 13:07:39 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:36.146 13:07:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:36.146 13:07:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.146 13:07:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:36.146 [2024-04-17 13:07:40.067174] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:23:36.146 [2024-04-17 13:07:40.067482] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:23:36.404 13:07:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:36.404 "name": "raid_bdev1", 00:23:36.404 "uuid": "a619199b-1095-4ec8-a3a0-cbe363436ddd", 00:23:36.404 "strip_size_kb": 0, 00:23:36.404 "state": "online", 00:23:36.404 "raid_level": "raid1", 00:23:36.404 "superblock": false, 00:23:36.404 "num_base_bdevs": 2, 00:23:36.404 "num_base_bdevs_discovered": 2, 00:23:36.404 "num_base_bdevs_operational": 2, 00:23:36.404 "process": { 00:23:36.404 "type": "rebuild", 00:23:36.404 "target": "spare", 00:23:36.404 "progress": { 00:23:36.404 "blocks": 18432, 00:23:36.404 "percent": 28 00:23:36.404 } 00:23:36.404 }, 00:23:36.404 "base_bdevs_list": [ 00:23:36.404 { 00:23:36.404 "name": "spare", 00:23:36.404 "uuid": "5fbc1d29-e800-5c8f-972e-6180ef9a3b7c", 00:23:36.404 "is_configured": true, 00:23:36.404 "data_offset": 0, 00:23:36.404 "data_size": 65536 00:23:36.404 }, 00:23:36.404 { 00:23:36.404 "name": "BaseBdev2", 00:23:36.404 "uuid": "41bf4dd2-2db2-4ba1-ae5c-79d9c23f34fd", 00:23:36.404 "is_configured": true, 00:23:36.404 "data_offset": 0, 00:23:36.404 "data_size": 65536 00:23:36.404 } 00:23:36.404 ] 00:23:36.404 }' 00:23:36.404 13:07:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:36.405 13:07:40 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:36.405 13:07:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:36.405 [2024-04-17 13:07:40.398334] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:23:36.405 [2024-04-17 13:07:40.398825] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:23:36.405 13:07:40 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:36.405 13:07:40 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:23:36.405 13:07:40 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:23:36.405 13:07:40 -- bdev/bdev_raid.sh@644 
-- # '[' raid1 = raid1 ']' 00:23:36.405 13:07:40 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:23:36.405 13:07:40 -- bdev/bdev_raid.sh@657 -- # local timeout=481 00:23:36.405 13:07:40 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:36.405 13:07:40 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:36.405 13:07:40 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:36.405 13:07:40 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:36.405 13:07:40 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:36.405 13:07:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:36.405 13:07:40 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.405 13:07:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:36.405 [2024-04-17 13:07:40.524899] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:23:36.701 13:07:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:36.701 "name": "raid_bdev1", 00:23:36.701 "uuid": "a619199b-1095-4ec8-a3a0-cbe363436ddd", 00:23:36.701 "strip_size_kb": 0, 00:23:36.701 "state": "online", 00:23:36.701 "raid_level": "raid1", 00:23:36.701 "superblock": false, 00:23:36.701 "num_base_bdevs": 2, 00:23:36.701 "num_base_bdevs_discovered": 2, 00:23:36.701 "num_base_bdevs_operational": 2, 00:23:36.701 "process": { 00:23:36.701 "type": "rebuild", 00:23:36.701 "target": "spare", 00:23:36.701 "progress": { 00:23:36.701 "blocks": 24576, 00:23:36.701 "percent": 37 00:23:36.701 } 00:23:36.701 }, 00:23:36.701 "base_bdevs_list": [ 00:23:36.701 { 00:23:36.701 "name": "spare", 00:23:36.701 "uuid": "5fbc1d29-e800-5c8f-972e-6180ef9a3b7c", 00:23:36.701 "is_configured": true, 00:23:36.701 "data_offset": 0, 00:23:36.701 "data_size": 65536 00:23:36.701 }, 00:23:36.701 { 00:23:36.701 "name": "BaseBdev2", 00:23:36.701 "uuid": "41bf4dd2-2db2-4ba1-ae5c-79d9c23f34fd", 00:23:36.701 "is_configured": true, 00:23:36.701 "data_offset": 0, 00:23:36.701 "data_size": 65536 00:23:36.701 } 00:23:36.701 ] 00:23:36.701 }' 00:23:36.701 13:07:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:36.701 13:07:40 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:36.701 13:07:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:36.979 13:07:40 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:36.979 13:07:40 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:37.282 [2024-04-17 13:07:41.173549] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:23:37.563 [2024-04-17 13:07:41.499940] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:23:37.822 13:07:41 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:37.822 13:07:41 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:37.822 13:07:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:37.822 13:07:41 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:37.822 13:07:41 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:37.822 13:07:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:37.822 13:07:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
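The state checks traced at bdev_raid.sh@188-191 above reduce to a small fetch-and-filter pattern: pull the raid bdev record once over the RPC socket, then assert on the background-process fields. A minimal sketch, using the rpc.py path, socket, and jq filters exactly as logged (the shorthand variable names rpc, sock, and info are illustrative, not from the test script):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Grab the raid_bdev1 record once, then check the rebuild-process fields.
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r '.process.type // "none"' <<< "$info") == rebuild ]]  # a rebuild is in progress
    [[ $(jq -r '.process.target // "none"' <<< "$info") == spare ]]  # and it targets the spare

This is a condensed stand-in for the verify_raid_bdev_process helper seen in the trace; the `// "none"` defaults are what let the same check distinguish an active rebuild from a finished one.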
00:23:37.822 13:07:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:37.822 [2024-04-17 13:07:41.833262] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:23:38.080 13:07:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:38.080 "name": "raid_bdev1", 00:23:38.080 "uuid": "a619199b-1095-4ec8-a3a0-cbe363436ddd", 00:23:38.080 "strip_size_kb": 0, 00:23:38.080 "state": "online", 00:23:38.080 "raid_level": "raid1", 00:23:38.080 "superblock": false, 00:23:38.080 "num_base_bdevs": 2, 00:23:38.080 "num_base_bdevs_discovered": 2, 00:23:38.080 "num_base_bdevs_operational": 2, 00:23:38.080 "process": { 00:23:38.081 "type": "rebuild", 00:23:38.081 "target": "spare", 00:23:38.081 "progress": { 00:23:38.081 "blocks": 49152, 00:23:38.081 "percent": 75 00:23:38.081 } 00:23:38.081 }, 00:23:38.081 "base_bdevs_list": [ 00:23:38.081 { 00:23:38.081 "name": "spare", 00:23:38.081 "uuid": "5fbc1d29-e800-5c8f-972e-6180ef9a3b7c", 00:23:38.081 "is_configured": true, 00:23:38.081 "data_offset": 0, 00:23:38.081 "data_size": 65536 00:23:38.081 }, 00:23:38.081 { 00:23:38.081 "name": "BaseBdev2", 00:23:38.081 "uuid": "41bf4dd2-2db2-4ba1-ae5c-79d9c23f34fd", 00:23:38.081 "is_configured": true, 00:23:38.081 "data_offset": 0, 00:23:38.081 "data_size": 65536 00:23:38.081 } 00:23:38.081 ] 00:23:38.081 }' 00:23:38.081 13:07:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:38.081 13:07:42 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:38.081 13:07:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:38.081 13:07:42 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:38.081 13:07:42 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:39.015 [2024-04-17 13:07:42.834119] bdev_raid.c:2716:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:39.015 [2024-04-17 13:07:42.942682] bdev_raid.c:2433:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:39.015 [2024-04-17 13:07:42.945345] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:39.273 13:07:43 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:39.273 13:07:43 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:39.273 13:07:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:39.273 13:07:43 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:39.273 13:07:43 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:39.273 13:07:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:39.273 13:07:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:39.273 13:07:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.531 13:07:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:39.531 "name": "raid_bdev1", 00:23:39.531 "uuid": "a619199b-1095-4ec8-a3a0-cbe363436ddd", 00:23:39.531 "strip_size_kb": 0, 00:23:39.531 "state": "online", 00:23:39.531 "raid_level": "raid1", 00:23:39.531 "superblock": false, 00:23:39.531 "num_base_bdevs": 2, 00:23:39.531 "num_base_bdevs_discovered": 2, 00:23:39.531 "num_base_bdevs_operational": 2, 00:23:39.531 "base_bdevs_list": [ 00:23:39.531 { 00:23:39.531 "name": "spare", 00:23:39.531 "uuid": "5fbc1d29-e800-5c8f-972e-6180ef9a3b7c", 00:23:39.531 "is_configured": true, 00:23:39.531 "data_offset": 0, 00:23:39.531 
"data_size": 65536 00:23:39.531 }, 00:23:39.531 { 00:23:39.531 "name": "BaseBdev2", 00:23:39.531 "uuid": "41bf4dd2-2db2-4ba1-ae5c-79d9c23f34fd", 00:23:39.531 "is_configured": true, 00:23:39.531 "data_offset": 0, 00:23:39.531 "data_size": 65536 00:23:39.531 } 00:23:39.531 ] 00:23:39.531 }' 00:23:39.531 13:07:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:39.531 13:07:43 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:39.531 13:07:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:39.531 13:07:43 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:23:39.531 13:07:43 -- bdev/bdev_raid.sh@660 -- # break 00:23:39.531 13:07:43 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:39.531 13:07:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:39.531 13:07:43 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:39.531 13:07:43 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:39.531 13:07:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:39.531 13:07:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:39.531 13:07:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.789 13:07:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:39.789 "name": "raid_bdev1", 00:23:39.789 "uuid": "a619199b-1095-4ec8-a3a0-cbe363436ddd", 00:23:39.789 "strip_size_kb": 0, 00:23:39.789 "state": "online", 00:23:39.789 "raid_level": "raid1", 00:23:39.789 "superblock": false, 00:23:39.789 "num_base_bdevs": 2, 00:23:39.789 "num_base_bdevs_discovered": 2, 00:23:39.789 "num_base_bdevs_operational": 2, 00:23:39.789 "base_bdevs_list": [ 00:23:39.789 { 00:23:39.789 "name": "spare", 00:23:39.789 "uuid": "5fbc1d29-e800-5c8f-972e-6180ef9a3b7c", 00:23:39.789 "is_configured": true, 00:23:39.789 "data_offset": 0, 00:23:39.789 "data_size": 65536 00:23:39.789 }, 00:23:39.789 { 00:23:39.789 "name": "BaseBdev2", 00:23:39.789 "uuid": "41bf4dd2-2db2-4ba1-ae5c-79d9c23f34fd", 00:23:39.789 "is_configured": true, 00:23:39.789 "data_offset": 0, 00:23:39.789 "data_size": 65536 00:23:39.789 } 00:23:39.789 ] 00:23:39.789 }' 00:23:39.789 13:07:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:39.789 13:07:43 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:39.789 13:07:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:39.789 13:07:43 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:39.789 13:07:43 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:39.789 13:07:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:39.789 13:07:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:39.789 13:07:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:39.789 13:07:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:39.789 13:07:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:39.789 13:07:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:39.789 13:07:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:39.789 13:07:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:39.789 13:07:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:39.789 13:07:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:39.789 13:07:43 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:40.048 13:07:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:40.048 "name": "raid_bdev1", 00:23:40.048 "uuid": "a619199b-1095-4ec8-a3a0-cbe363436ddd", 00:23:40.048 "strip_size_kb": 0, 00:23:40.048 "state": "online", 00:23:40.048 "raid_level": "raid1", 00:23:40.048 "superblock": false, 00:23:40.048 "num_base_bdevs": 2, 00:23:40.048 "num_base_bdevs_discovered": 2, 00:23:40.048 "num_base_bdevs_operational": 2, 00:23:40.048 "base_bdevs_list": [ 00:23:40.048 { 00:23:40.048 "name": "spare", 00:23:40.048 "uuid": "5fbc1d29-e800-5c8f-972e-6180ef9a3b7c", 00:23:40.048 "is_configured": true, 00:23:40.048 "data_offset": 0, 00:23:40.048 "data_size": 65536 00:23:40.048 }, 00:23:40.048 { 00:23:40.048 "name": "BaseBdev2", 00:23:40.048 "uuid": "41bf4dd2-2db2-4ba1-ae5c-79d9c23f34fd", 00:23:40.048 "is_configured": true, 00:23:40.048 "data_offset": 0, 00:23:40.048 "data_size": 65536 00:23:40.048 } 00:23:40.048 ] 00:23:40.048 }' 00:23:40.048 13:07:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:40.048 13:07:44 -- common/autotest_common.sh@10 -- # set +x 00:23:40.625 13:07:44 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:40.882 [2024-04-17 13:07:44.983280] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:40.882 [2024-04-17 13:07:44.983314] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:40.882 00:23:40.882 Latency(us) 00:23:40.882 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.882 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:23:40.882 raid_bdev1 : 10.89 116.42 349.25 0.00 0.00 11486.34 329.54 109623.85 00:23:40.882 =================================================================================================================== 00:23:40.882 Total : 116.42 349.25 0.00 0.00 11486.34 329.54 109623.85 00:23:41.140 [2024-04-17 13:07:45.031257] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:41.140 [2024-04-17 13:07:45.031311] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:41.140 [2024-04-17 13:07:45.031407] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:41.140 0 00:23:41.140 [2024-04-17 13:07:45.031421] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:23:41.140 13:07:45 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:41.140 13:07:45 -- bdev/bdev_raid.sh@671 -- # jq length 00:23:41.140 13:07:45 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:23:41.140 13:07:45 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:23:41.140 13:07:45 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:23:41.140 13:07:45 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:41.140 13:07:45 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:23:41.140 13:07:45 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:41.140 13:07:45 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:23:41.140 13:07:45 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:41.140 13:07:45 -- bdev/nbd_common.sh@12 -- # local i 00:23:41.140 13:07:45 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:41.140 
13:07:45 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:41.140 13:07:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:23:41.398 /dev/nbd0 00:23:41.398 13:07:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:41.398 13:07:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:41.398 13:07:45 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:23:41.398 13:07:45 -- common/autotest_common.sh@855 -- # local i 00:23:41.398 13:07:45 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:23:41.398 13:07:45 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:23:41.398 13:07:45 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:23:41.398 13:07:45 -- common/autotest_common.sh@859 -- # break 00:23:41.398 13:07:45 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:23:41.398 13:07:45 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:23:41.398 13:07:45 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:41.398 1+0 records in 00:23:41.398 1+0 records out 00:23:41.398 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212997 s, 19.2 MB/s 00:23:41.398 13:07:45 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:41.398 13:07:45 -- common/autotest_common.sh@872 -- # size=4096 00:23:41.398 13:07:45 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:41.398 13:07:45 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:23:41.398 13:07:45 -- common/autotest_common.sh@875 -- # return 0 00:23:41.398 13:07:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:41.398 13:07:45 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:41.398 13:07:45 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:23:41.398 13:07:45 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:23:41.398 13:07:45 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:23:41.398 13:07:45 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:41.398 13:07:45 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:23:41.398 13:07:45 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:41.398 13:07:45 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:23:41.398 13:07:45 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:41.398 13:07:45 -- bdev/nbd_common.sh@12 -- # local i 00:23:41.398 13:07:45 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:41.398 13:07:45 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:41.398 13:07:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:23:41.656 /dev/nbd1 00:23:41.656 13:07:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:41.656 13:07:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:41.656 13:07:45 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:23:41.656 13:07:45 -- common/autotest_common.sh@855 -- # local i 00:23:41.656 13:07:45 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:23:41.656 13:07:45 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:23:41.656 13:07:45 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:23:41.656 13:07:45 -- common/autotest_common.sh@859 -- # break 00:23:41.656 13:07:45 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:23:41.656 13:07:45 -- common/autotest_common.sh@870 -- # (( 
i <= 20 )) 00:23:41.656 13:07:45 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:41.656 1+0 records in 00:23:41.656 1+0 records out 00:23:41.656 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00098108 s, 4.2 MB/s 00:23:41.656 13:07:45 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:41.656 13:07:45 -- common/autotest_common.sh@872 -- # size=4096 00:23:41.656 13:07:45 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:41.914 13:07:45 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:23:41.914 13:07:45 -- common/autotest_common.sh@875 -- # return 0 00:23:41.914 13:07:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:41.914 13:07:45 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:41.914 13:07:45 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:41.914 13:07:45 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:23:41.914 13:07:45 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:41.914 13:07:45 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:23:41.914 13:07:45 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:41.914 13:07:45 -- bdev/nbd_common.sh@51 -- # local i 00:23:41.914 13:07:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:41.914 13:07:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:42.174 13:07:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:42.174 13:07:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:42.174 13:07:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:42.174 13:07:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:42.174 13:07:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:42.174 13:07:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:42.174 13:07:46 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:23:42.174 13:07:46 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:23:42.174 13:07:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:42.174 13:07:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:42.174 13:07:46 -- bdev/nbd_common.sh@41 -- # break 00:23:42.174 13:07:46 -- bdev/nbd_common.sh@45 -- # return 0 00:23:42.174 13:07:46 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:42.174 13:07:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:42.174 13:07:46 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:23:42.174 13:07:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:42.174 13:07:46 -- bdev/nbd_common.sh@51 -- # local i 00:23:42.174 13:07:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:42.174 13:07:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:42.432 13:07:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:42.432 13:07:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:42.432 13:07:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:42.432 13:07:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:42.432 13:07:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:42.432 13:07:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:42.432 13:07:46 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:23:42.689 13:07:46 -- bdev/nbd_common.sh@37 -- # (( i++ 
)) 00:23:42.689 13:07:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:42.689 13:07:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:42.689 13:07:46 -- bdev/nbd_common.sh@41 -- # break 00:23:42.689 13:07:46 -- bdev/nbd_common.sh@45 -- # return 0 00:23:42.689 13:07:46 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:23:42.689 13:07:46 -- bdev/bdev_raid.sh@709 -- # killprocess 131844 00:23:42.689 13:07:46 -- common/autotest_common.sh@924 -- # '[' -z 131844 ']' 00:23:42.689 13:07:46 -- common/autotest_common.sh@928 -- # kill -0 131844 00:23:42.689 13:07:46 -- common/autotest_common.sh@929 -- # uname 00:23:42.689 13:07:46 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:23:42.689 13:07:46 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 131844 00:23:42.689 13:07:46 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:23:42.689 13:07:46 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:23:42.689 13:07:46 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 131844' 00:23:42.689 killing process with pid 131844 00:23:42.689 13:07:46 -- common/autotest_common.sh@943 -- # kill 131844 00:23:42.689 Received shutdown signal, test time was about 12.556994 seconds 00:23:42.689 00:23:42.689 Latency(us) 00:23:42.689 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:42.689 =================================================================================================================== 00:23:42.690 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:42.690 13:07:46 -- common/autotest_common.sh@948 -- # wait 131844 00:23:42.690 [2024-04-17 13:07:46.680385] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:42.947 [2024-04-17 13:07:46.843315] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:43.882 13:07:47 -- bdev/bdev_raid.sh@711 -- # return 0 00:23:43.882 00:23:43.882 real 0m18.054s 00:23:43.882 user 0m28.380s 00:23:43.882 sys 0m1.766s 00:23:43.882 13:07:47 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:23:43.882 13:07:47 -- common/autotest_common.sh@10 -- # set +x 00:23:43.882 ************************************ 00:23:43.882 END TEST raid_rebuild_test_io 00:23:43.882 ************************************ 00:23:43.882 13:07:48 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true 00:23:43.882 13:07:48 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:23:43.882 13:07:48 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:23:43.882 13:07:48 -- common/autotest_common.sh@10 -- # set +x 00:23:44.140 ************************************ 00:23:44.140 START TEST raid_rebuild_test_sb_io 00:23:44.140 ************************************ 00:23:44.140 13:07:48 -- common/autotest_common.sh@1099 -- # raid_rebuild_test raid1 2 true true 00:23:44.140 13:07:48 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:23:44.140 13:07:48 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:23:44.140 13:07:48 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:23:44.140 13:07:48 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:23:44.140 13:07:48 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:23:44.140 13:07:48 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:23:44.140 13:07:48 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:44.140 13:07:48 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:23:44.140 13:07:48 -- 
bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:44.140 13:07:48 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:44.140 13:07:48 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:23:44.140 13:07:48 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:44.140 13:07:48 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:44.140 13:07:48 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:23:44.140 13:07:48 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:23:44.140 13:07:48 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:23:44.140 13:07:48 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:23:44.140 13:07:48 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:23:44.140 13:07:48 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:23:44.140 13:07:48 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:23:44.140 13:07:48 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:23:44.140 13:07:48 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:23:44.140 13:07:48 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:23:44.140 13:07:48 -- bdev/bdev_raid.sh@544 -- # raid_pid=132355 00:23:44.140 13:07:48 -- bdev/bdev_raid.sh@545 -- # waitforlisten 132355 /var/tmp/spdk-raid.sock 00:23:44.140 13:07:48 -- common/autotest_common.sh@817 -- # '[' -z 132355 ']' 00:23:44.140 13:07:48 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:44.140 13:07:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:44.140 13:07:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:44.140 13:07:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:44.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:44.140 13:07:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:44.140 13:07:48 -- common/autotest_common.sh@10 -- # set +x 00:23:44.140 [2024-04-17 13:07:48.131830] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:23:44.140 [2024-04-17 13:07:48.132446] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132355 ] 00:23:44.140 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:44.140 Zero copy mechanism will not be used. 
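For context, the bdevperf invocation logged at bdev_raid.sh@543 above is what generates the background I/O for this test, and its 3 MiB request size (-o 3M, i.e. 3145728 bytes) is why the zero-copy threshold notice appears: 3145728 exceeds the 65536-byte threshold, so zero copy is disabled. Reproduced with comments as a reading aid (flag descriptions reflect common bdevperf usage and should be treated as informal annotation):

    # 60 s of 50/50 random read/write with 3 MiB I/Os at queue depth 2
    # against raid_bdev1; -z defers the workload until the perform_tests
    # RPC (driven by bdevperf.py, as seen earlier in this trace).
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 \
        -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid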
00:23:44.398 [2024-04-17 13:07:48.304180] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.398 [2024-04-17 13:07:48.513315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.656 [2024-04-17 13:07:48.715885] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:45.222 13:07:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:45.222 13:07:49 -- common/autotest_common.sh@850 -- # return 0 00:23:45.222 13:07:49 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:45.222 13:07:49 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:45.222 13:07:49 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:45.222 BaseBdev1_malloc 00:23:45.222 13:07:49 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:45.480 [2024-04-17 13:07:49.568629] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:45.481 [2024-04-17 13:07:49.568970] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:45.481 [2024-04-17 13:07:49.569040] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:23:45.481 [2024-04-17 13:07:49.569353] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:45.481 [2024-04-17 13:07:49.572252] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:45.481 [2024-04-17 13:07:49.572444] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:45.481 BaseBdev1 00:23:45.481 13:07:49 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:45.481 13:07:49 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:45.481 13:07:49 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:46.047 BaseBdev2_malloc 00:23:46.048 13:07:49 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:46.048 [2024-04-17 13:07:50.138047] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:46.048 [2024-04-17 13:07:50.138419] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:46.048 [2024-04-17 13:07:50.138504] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:23:46.048 [2024-04-17 13:07:50.138660] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:46.048 [2024-04-17 13:07:50.141296] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:46.048 [2024-04-17 13:07:50.141477] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:46.048 BaseBdev2 00:23:46.048 13:07:50 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:46.306 spare_malloc 00:23:46.306 13:07:50 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:46.565 spare_delay 00:23:46.565 13:07:50 -- bdev/bdev_raid.sh@560 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:46.824 [2024-04-17 13:07:50.906031] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:46.824 [2024-04-17 13:07:50.906316] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:46.824 [2024-04-17 13:07:50.906501] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:23:46.824 [2024-04-17 13:07:50.906685] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:46.824 [2024-04-17 13:07:50.909427] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:46.824 [2024-04-17 13:07:50.909603] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:46.824 spare 00:23:46.824 13:07:50 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:23:47.083 [2024-04-17 13:07:51.146224] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:47.083 [2024-04-17 13:07:51.148780] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:47.083 [2024-04-17 13:07:51.149172] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:23:47.083 [2024-04-17 13:07:51.149323] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:47.083 [2024-04-17 13:07:51.149538] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:23:47.083 [2024-04-17 13:07:51.150083] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:23:47.083 [2024-04-17 13:07:51.150233] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:23:47.083 [2024-04-17 13:07:51.150581] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:47.083 13:07:51 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:47.083 13:07:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:47.083 13:07:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:47.083 13:07:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:47.083 13:07:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:47.083 13:07:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:47.083 13:07:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:47.083 13:07:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:47.083 13:07:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:47.083 13:07:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:47.083 13:07:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:47.083 13:07:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:47.341 13:07:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:47.341 "name": "raid_bdev1", 00:23:47.341 "uuid": "e502c561-a9f2-497a-b93c-a1c978f1128c", 00:23:47.341 "strip_size_kb": 0, 00:23:47.341 "state": "online", 00:23:47.341 "raid_level": "raid1", 00:23:47.341 "superblock": true, 00:23:47.341 "num_base_bdevs": 2, 00:23:47.341 "num_base_bdevs_discovered": 2, 00:23:47.341 "num_base_bdevs_operational": 2, 00:23:47.341 
"base_bdevs_list": [ 00:23:47.341 { 00:23:47.341 "name": "BaseBdev1", 00:23:47.341 "uuid": "501e6a72-1e30-540e-96fa-e7de3ccc9dc5", 00:23:47.341 "is_configured": true, 00:23:47.341 "data_offset": 2048, 00:23:47.341 "data_size": 63488 00:23:47.341 }, 00:23:47.341 { 00:23:47.341 "name": "BaseBdev2", 00:23:47.341 "uuid": "ed4c948b-966e-544c-a558-78d61ed5769e", 00:23:47.341 "is_configured": true, 00:23:47.341 "data_offset": 2048, 00:23:47.341 "data_size": 63488 00:23:47.341 } 00:23:47.341 ] 00:23:47.341 }' 00:23:47.341 13:07:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:47.341 13:07:51 -- common/autotest_common.sh@10 -- # set +x 00:23:48.312 13:07:52 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:48.312 13:07:52 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:23:48.312 [2024-04-17 13:07:52.391064] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:48.312 13:07:52 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:23:48.312 13:07:52 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:48.312 13:07:52 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:48.571 13:07:52 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:23:48.571 13:07:52 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:23:48.571 13:07:52 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:48.571 13:07:52 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:23:48.829 [2024-04-17 13:07:52.738614] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:23:48.829 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:48.829 Zero copy mechanism will not be used. 00:23:48.829 Running I/O for 60 seconds... 
00:23:48.829 [2024-04-17 13:07:52.843050] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:48.830 [2024-04-17 13:07:52.850611] bdev_raid.c:1969:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ad0 00:23:48.830 13:07:52 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:48.830 13:07:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:48.830 13:07:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:48.830 13:07:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:48.830 13:07:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:48.830 13:07:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:23:48.830 13:07:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:48.830 13:07:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:48.830 13:07:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:48.830 13:07:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:48.830 13:07:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:48.830 13:07:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:49.088 13:07:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:49.088 "name": "raid_bdev1", 00:23:49.088 "uuid": "e502c561-a9f2-497a-b93c-a1c978f1128c", 00:23:49.088 "strip_size_kb": 0, 00:23:49.088 "state": "online", 00:23:49.088 "raid_level": "raid1", 00:23:49.088 "superblock": true, 00:23:49.088 "num_base_bdevs": 2, 00:23:49.088 "num_base_bdevs_discovered": 1, 00:23:49.088 "num_base_bdevs_operational": 1, 00:23:49.088 "base_bdevs_list": [ 00:23:49.088 { 00:23:49.088 "name": null, 00:23:49.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:49.088 "is_configured": false, 00:23:49.088 "data_offset": 2048, 00:23:49.088 "data_size": 63488 00:23:49.088 }, 00:23:49.088 { 00:23:49.088 "name": "BaseBdev2", 00:23:49.088 "uuid": "ed4c948b-966e-544c-a558-78d61ed5769e", 00:23:49.088 "is_configured": true, 00:23:49.088 "data_offset": 2048, 00:23:49.088 "data_size": 63488 00:23:49.088 } 00:23:49.088 ] 00:23:49.088 }' 00:23:49.088 13:07:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:49.088 13:07:53 -- common/autotest_common.sh@10 -- # set +x 00:23:50.025 13:07:53 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:50.298 [2024-04-17 13:07:54.182386] bdev_raid.c:3247:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:50.298 [2024-04-17 13:07:54.182700] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:50.298 13:07:54 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:23:50.299 [2024-04-17 13:07:54.249497] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:23:50.299 [2024-04-17 13:07:54.251875] bdev_raid.c:2751:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:50.299 [2024-04-17 13:07:54.376846] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:50.299 [2024-04-17 13:07:54.377677] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:50.559 [2024-04-17 13:07:54.596267] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:23:50.559 [2024-04-17 13:07:54.596775] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:50.818 [2024-04-17 13:07:54.927267] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:50.818 [2024-04-17 13:07:54.928124] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:51.385 13:07:55 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:51.385 13:07:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:51.385 13:07:55 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:51.385 13:07:55 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:51.385 13:07:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:51.385 13:07:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:51.385 13:07:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:51.385 13:07:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:51.385 "name": "raid_bdev1", 00:23:51.385 "uuid": "e502c561-a9f2-497a-b93c-a1c978f1128c", 00:23:51.385 "strip_size_kb": 0, 00:23:51.385 "state": "online", 00:23:51.385 "raid_level": "raid1", 00:23:51.385 "superblock": true, 00:23:51.385 "num_base_bdevs": 2, 00:23:51.385 "num_base_bdevs_discovered": 2, 00:23:51.385 "num_base_bdevs_operational": 2, 00:23:51.385 "process": { 00:23:51.385 "type": "rebuild", 00:23:51.385 "target": "spare", 00:23:51.385 "progress": { 00:23:51.385 "blocks": 16384, 00:23:51.385 "percent": 25 00:23:51.385 } 00:23:51.385 }, 00:23:51.385 "base_bdevs_list": [ 00:23:51.385 { 00:23:51.385 "name": "spare", 00:23:51.385 "uuid": "5bffeb81-0173-5f9d-bff7-701eba1edf9f", 00:23:51.385 "is_configured": true, 00:23:51.385 "data_offset": 2048, 00:23:51.385 "data_size": 63488 00:23:51.385 }, 00:23:51.385 { 00:23:51.385 "name": "BaseBdev2", 00:23:51.386 "uuid": "ed4c948b-966e-544c-a558-78d61ed5769e", 00:23:51.386 "is_configured": true, 00:23:51.386 "data_offset": 2048, 00:23:51.386 "data_size": 63488 00:23:51.386 } 00:23:51.386 ] 00:23:51.386 }' 00:23:51.386 13:07:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:51.386 13:07:55 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:51.386 13:07:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:51.644 13:07:55 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:51.644 13:07:55 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:51.644 [2024-04-17 13:07:55.655706] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:23:51.644 [2024-04-17 13:07:55.784450] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:23:51.902 [2024-04-17 13:07:55.831949] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:51.902 [2024-04-17 13:07:56.000569] bdev_raid.c:2442:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:51.902 [2024-04-17 13:07:56.010299] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:51.902 [2024-04-17 13:07:56.047336] 
bdev_raid.c:1969:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ad0 00:23:52.160 13:07:56 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:52.160 13:07:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:52.160 13:07:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:52.160 13:07:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:52.160 13:07:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:52.160 13:07:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:23:52.160 13:07:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:52.160 13:07:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:52.160 13:07:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:52.160 13:07:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:52.160 13:07:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:52.160 13:07:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:52.160 13:07:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:52.160 "name": "raid_bdev1", 00:23:52.160 "uuid": "e502c561-a9f2-497a-b93c-a1c978f1128c", 00:23:52.160 "strip_size_kb": 0, 00:23:52.160 "state": "online", 00:23:52.160 "raid_level": "raid1", 00:23:52.160 "superblock": true, 00:23:52.160 "num_base_bdevs": 2, 00:23:52.160 "num_base_bdevs_discovered": 1, 00:23:52.160 "num_base_bdevs_operational": 1, 00:23:52.160 "base_bdevs_list": [ 00:23:52.160 { 00:23:52.160 "name": null, 00:23:52.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:52.160 "is_configured": false, 00:23:52.160 "data_offset": 2048, 00:23:52.160 "data_size": 63488 00:23:52.160 }, 00:23:52.160 { 00:23:52.160 "name": "BaseBdev2", 00:23:52.160 "uuid": "ed4c948b-966e-544c-a558-78d61ed5769e", 00:23:52.160 "is_configured": true, 00:23:52.160 "data_offset": 2048, 00:23:52.160 "data_size": 63488 00:23:52.160 } 00:23:52.160 ] 00:23:52.160 }' 00:23:52.160 13:07:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:52.160 13:07:56 -- common/autotest_common.sh@10 -- # set +x 00:23:53.107 13:07:56 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:53.107 13:07:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:53.107 13:07:56 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:53.107 13:07:56 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:53.107 13:07:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:53.107 13:07:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:53.107 13:07:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:53.107 13:07:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:53.107 "name": "raid_bdev1", 00:23:53.107 "uuid": "e502c561-a9f2-497a-b93c-a1c978f1128c", 00:23:53.107 "strip_size_kb": 0, 00:23:53.107 "state": "online", 00:23:53.107 "raid_level": "raid1", 00:23:53.107 "superblock": true, 00:23:53.107 "num_base_bdevs": 2, 00:23:53.107 "num_base_bdevs_discovered": 1, 00:23:53.107 "num_base_bdevs_operational": 1, 00:23:53.107 "base_bdevs_list": [ 00:23:53.107 { 00:23:53.107 "name": null, 00:23:53.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:53.107 "is_configured": false, 00:23:53.107 "data_offset": 2048, 00:23:53.107 "data_size": 63488 00:23:53.107 }, 00:23:53.107 { 00:23:53.107 
"name": "BaseBdev2", 00:23:53.107 "uuid": "ed4c948b-966e-544c-a558-78d61ed5769e", 00:23:53.107 "is_configured": true, 00:23:53.107 "data_offset": 2048, 00:23:53.107 "data_size": 63488 00:23:53.107 } 00:23:53.107 ] 00:23:53.107 }' 00:23:53.107 13:07:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:53.107 13:07:57 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:53.107 13:07:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:53.374 13:07:57 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:53.374 13:07:57 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:53.633 [2024-04-17 13:07:57.526025] bdev_raid.c:3247:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:53.633 [2024-04-17 13:07:57.526327] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:53.633 13:07:57 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:23:53.633 [2024-04-17 13:07:57.594567] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:23:53.633 [2024-04-17 13:07:57.596695] bdev_raid.c:2751:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:53.633 [2024-04-17 13:07:57.704410] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:53.633 [2024-04-17 13:07:57.704886] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:53.891 [2024-04-17 13:07:57.916111] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:53.891 [2024-04-17 13:07:57.916556] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:54.149 [2024-04-17 13:07:58.287410] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:54.408 [2024-04-17 13:07:58.422741] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:54.408 [2024-04-17 13:07:58.423186] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:54.666 13:07:58 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:54.666 13:07:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:54.666 13:07:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:54.666 13:07:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:54.666 13:07:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:54.666 13:07:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:54.666 13:07:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:54.925 13:07:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:54.925 "name": "raid_bdev1", 00:23:54.925 "uuid": "e502c561-a9f2-497a-b93c-a1c978f1128c", 00:23:54.925 "strip_size_kb": 0, 00:23:54.925 "state": "online", 00:23:54.925 "raid_level": "raid1", 00:23:54.925 "superblock": true, 00:23:54.925 "num_base_bdevs": 2, 00:23:54.925 "num_base_bdevs_discovered": 2, 00:23:54.925 "num_base_bdevs_operational": 2, 00:23:54.925 "process": { 00:23:54.925 "type": "rebuild", 00:23:54.925 
"target": "spare", 00:23:54.925 "progress": { 00:23:54.925 "blocks": 14336, 00:23:54.925 "percent": 22 00:23:54.925 } 00:23:54.925 }, 00:23:54.925 "base_bdevs_list": [ 00:23:54.925 { 00:23:54.925 "name": "spare", 00:23:54.925 "uuid": "5bffeb81-0173-5f9d-bff7-701eba1edf9f", 00:23:54.925 "is_configured": true, 00:23:54.925 "data_offset": 2048, 00:23:54.925 "data_size": 63488 00:23:54.925 }, 00:23:54.925 { 00:23:54.925 "name": "BaseBdev2", 00:23:54.925 "uuid": "ed4c948b-966e-544c-a558-78d61ed5769e", 00:23:54.925 "is_configured": true, 00:23:54.925 "data_offset": 2048, 00:23:54.925 "data_size": 63488 00:23:54.925 } 00:23:54.925 ] 00:23:54.925 }' 00:23:54.925 13:07:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:54.925 13:07:58 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:54.925 13:07:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:54.925 13:07:58 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:54.925 13:07:58 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:23:54.925 13:07:58 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:23:54.925 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:23:54.925 13:07:58 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:23:54.925 13:07:58 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:23:54.925 13:07:58 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:23:54.925 13:07:58 -- bdev/bdev_raid.sh@657 -- # local timeout=499 00:23:54.925 13:07:58 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:54.925 13:07:58 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:54.925 13:07:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:54.925 13:07:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:54.925 13:07:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:54.925 13:07:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:54.925 13:07:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:54.925 13:07:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:55.183 [2024-04-17 13:07:59.115328] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:23:55.183 13:07:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:55.183 "name": "raid_bdev1", 00:23:55.183 "uuid": "e502c561-a9f2-497a-b93c-a1c978f1128c", 00:23:55.183 "strip_size_kb": 0, 00:23:55.183 "state": "online", 00:23:55.183 "raid_level": "raid1", 00:23:55.183 "superblock": true, 00:23:55.183 "num_base_bdevs": 2, 00:23:55.183 "num_base_bdevs_discovered": 2, 00:23:55.183 "num_base_bdevs_operational": 2, 00:23:55.183 "process": { 00:23:55.183 "type": "rebuild", 00:23:55.183 "target": "spare", 00:23:55.183 "progress": { 00:23:55.183 "blocks": 20480, 00:23:55.183 "percent": 32 00:23:55.183 } 00:23:55.183 }, 00:23:55.183 "base_bdevs_list": [ 00:23:55.183 { 00:23:55.183 "name": "spare", 00:23:55.183 "uuid": "5bffeb81-0173-5f9d-bff7-701eba1edf9f", 00:23:55.183 "is_configured": true, 00:23:55.183 "data_offset": 2048, 00:23:55.183 "data_size": 63488 00:23:55.183 }, 00:23:55.183 { 00:23:55.183 "name": "BaseBdev2", 00:23:55.183 "uuid": "ed4c948b-966e-544c-a558-78d61ed5769e", 00:23:55.183 "is_configured": true, 00:23:55.183 "data_offset": 2048, 00:23:55.183 "data_size": 63488 00:23:55.183 } 00:23:55.183 ] 
00:23:55.183 }' 00:23:55.183 13:07:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:55.183 13:07:59 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:55.183 13:07:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:55.183 13:07:59 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:55.183 13:07:59 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:55.441 [2024-04-17 13:07:59.334690] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:23:56.008 [2024-04-17 13:08:00.041860] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:23:56.266 [2024-04-17 13:08:00.254565] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:23:56.266 [2024-04-17 13:08:00.254998] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:23:56.266 13:08:00 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:56.266 13:08:00 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:56.266 13:08:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:56.266 13:08:00 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:56.266 13:08:00 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:56.266 13:08:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:56.266 13:08:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:56.266 13:08:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:56.525 13:08:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:56.525 "name": "raid_bdev1", 00:23:56.525 "uuid": "e502c561-a9f2-497a-b93c-a1c978f1128c", 00:23:56.525 "strip_size_kb": 0, 00:23:56.525 "state": "online", 00:23:56.525 "raid_level": "raid1", 00:23:56.525 "superblock": true, 00:23:56.525 "num_base_bdevs": 2, 00:23:56.525 "num_base_bdevs_discovered": 2, 00:23:56.525 "num_base_bdevs_operational": 2, 00:23:56.525 "process": { 00:23:56.525 "type": "rebuild", 00:23:56.525 "target": "spare", 00:23:56.525 "progress": { 00:23:56.525 "blocks": 38912, 00:23:56.525 "percent": 61 00:23:56.525 } 00:23:56.525 }, 00:23:56.525 "base_bdevs_list": [ 00:23:56.525 { 00:23:56.525 "name": "spare", 00:23:56.525 "uuid": "5bffeb81-0173-5f9d-bff7-701eba1edf9f", 00:23:56.525 "is_configured": true, 00:23:56.525 "data_offset": 2048, 00:23:56.525 "data_size": 63488 00:23:56.525 }, 00:23:56.525 { 00:23:56.525 "name": "BaseBdev2", 00:23:56.525 "uuid": "ed4c948b-966e-544c-a558-78d61ed5769e", 00:23:56.525 "is_configured": true, 00:23:56.525 "data_offset": 2048, 00:23:56.525 "data_size": 63488 00:23:56.525 } 00:23:56.525 ] 00:23:56.525 }' 00:23:56.525 13:08:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:56.525 13:08:00 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:56.525 13:08:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:56.783 13:08:00 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:56.783 13:08:00 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:57.040 [2024-04-17 13:08:00.956480] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:23:57.297 [2024-04-17 13:08:01.284382] bdev_raid.c: 
853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:23:57.555 13:08:01 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:57.555 13:08:01 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:57.555 13:08:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:57.555 13:08:01 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:57.555 13:08:01 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:57.555 13:08:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:57.555 13:08:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:57.555 13:08:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:57.813 [2024-04-17 13:08:01.837279] bdev_raid.c:2716:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:57.813 13:08:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:57.813 "name": "raid_bdev1", 00:23:57.813 "uuid": "e502c561-a9f2-497a-b93c-a1c978f1128c", 00:23:57.813 "strip_size_kb": 0, 00:23:57.813 "state": "online", 00:23:57.813 "raid_level": "raid1", 00:23:57.813 "superblock": true, 00:23:57.813 "num_base_bdevs": 2, 00:23:57.813 "num_base_bdevs_discovered": 2, 00:23:57.813 "num_base_bdevs_operational": 2, 00:23:57.813 "process": { 00:23:57.813 "type": "rebuild", 00:23:57.813 "target": "spare", 00:23:57.813 "progress": { 00:23:57.813 "blocks": 63488, 00:23:57.813 "percent": 100 00:23:57.813 } 00:23:57.813 }, 00:23:57.813 "base_bdevs_list": [ 00:23:57.813 { 00:23:57.813 "name": "spare", 00:23:57.813 "uuid": "5bffeb81-0173-5f9d-bff7-701eba1edf9f", 00:23:57.813 "is_configured": true, 00:23:57.813 "data_offset": 2048, 00:23:57.813 "data_size": 63488 00:23:57.813 }, 00:23:57.813 { 00:23:57.813 "name": "BaseBdev2", 00:23:57.813 "uuid": "ed4c948b-966e-544c-a558-78d61ed5769e", 00:23:57.813 "is_configured": true, 00:23:57.813 "data_offset": 2048, 00:23:57.813 "data_size": 63488 00:23:57.813 } 00:23:57.813 ] 00:23:57.813 }' 00:23:57.813 13:08:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:57.813 [2024-04-17 13:08:01.937252] bdev_raid.c:2433:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:57.813 [2024-04-17 13:08:01.947532] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:58.072 13:08:01 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:58.072 13:08:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:58.072 13:08:02 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:58.072 13:08:02 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:59.006 13:08:03 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:59.006 13:08:03 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:59.006 13:08:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:59.006 13:08:03 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:59.006 13:08:03 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:59.006 13:08:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:59.006 13:08:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:59.006 13:08:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:59.264 13:08:03 -- bdev/bdev_raid.sh@188 -- # 
raid_bdev_info='{ 00:23:59.264 "name": "raid_bdev1", 00:23:59.264 "uuid": "e502c561-a9f2-497a-b93c-a1c978f1128c", 00:23:59.264 "strip_size_kb": 0, 00:23:59.264 "state": "online", 00:23:59.264 "raid_level": "raid1", 00:23:59.264 "superblock": true, 00:23:59.264 "num_base_bdevs": 2, 00:23:59.264 "num_base_bdevs_discovered": 2, 00:23:59.264 "num_base_bdevs_operational": 2, 00:23:59.264 "base_bdevs_list": [ 00:23:59.264 { 00:23:59.264 "name": "spare", 00:23:59.264 "uuid": "5bffeb81-0173-5f9d-bff7-701eba1edf9f", 00:23:59.264 "is_configured": true, 00:23:59.264 "data_offset": 2048, 00:23:59.264 "data_size": 63488 00:23:59.264 }, 00:23:59.264 { 00:23:59.264 "name": "BaseBdev2", 00:23:59.264 "uuid": "ed4c948b-966e-544c-a558-78d61ed5769e", 00:23:59.264 "is_configured": true, 00:23:59.264 "data_offset": 2048, 00:23:59.264 "data_size": 63488 00:23:59.264 } 00:23:59.264 ] 00:23:59.264 }' 00:23:59.264 13:08:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:59.264 13:08:03 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:59.264 13:08:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:59.548 13:08:03 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:23:59.548 13:08:03 -- bdev/bdev_raid.sh@660 -- # break 00:23:59.548 13:08:03 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:59.548 13:08:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:59.548 13:08:03 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:59.548 13:08:03 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:59.548 13:08:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:59.548 13:08:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:59.548 13:08:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:59.825 13:08:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:59.825 "name": "raid_bdev1", 00:23:59.825 "uuid": "e502c561-a9f2-497a-b93c-a1c978f1128c", 00:23:59.825 "strip_size_kb": 0, 00:23:59.825 "state": "online", 00:23:59.825 "raid_level": "raid1", 00:23:59.825 "superblock": true, 00:23:59.825 "num_base_bdevs": 2, 00:23:59.825 "num_base_bdevs_discovered": 2, 00:23:59.825 "num_base_bdevs_operational": 2, 00:23:59.825 "base_bdevs_list": [ 00:23:59.825 { 00:23:59.825 "name": "spare", 00:23:59.825 "uuid": "5bffeb81-0173-5f9d-bff7-701eba1edf9f", 00:23:59.825 "is_configured": true, 00:23:59.825 "data_offset": 2048, 00:23:59.825 "data_size": 63488 00:23:59.825 }, 00:23:59.825 { 00:23:59.825 "name": "BaseBdev2", 00:23:59.826 "uuid": "ed4c948b-966e-544c-a558-78d61ed5769e", 00:23:59.826 "is_configured": true, 00:23:59.826 "data_offset": 2048, 00:23:59.826 "data_size": 63488 00:23:59.826 } 00:23:59.826 ] 00:23:59.826 }' 00:23:59.826 13:08:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:59.826 13:08:03 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:59.826 13:08:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:59.826 13:08:03 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:59.826 13:08:03 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:59.826 13:08:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:59.826 13:08:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:59.826 13:08:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:59.826 13:08:03 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:59.826 13:08:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:59.826 13:08:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:59.826 13:08:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:59.826 13:08:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:59.826 13:08:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:59.826 13:08:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:59.826 13:08:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:00.083 13:08:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:00.083 "name": "raid_bdev1", 00:24:00.083 "uuid": "e502c561-a9f2-497a-b93c-a1c978f1128c", 00:24:00.083 "strip_size_kb": 0, 00:24:00.083 "state": "online", 00:24:00.083 "raid_level": "raid1", 00:24:00.083 "superblock": true, 00:24:00.083 "num_base_bdevs": 2, 00:24:00.083 "num_base_bdevs_discovered": 2, 00:24:00.083 "num_base_bdevs_operational": 2, 00:24:00.083 "base_bdevs_list": [ 00:24:00.083 { 00:24:00.083 "name": "spare", 00:24:00.083 "uuid": "5bffeb81-0173-5f9d-bff7-701eba1edf9f", 00:24:00.083 "is_configured": true, 00:24:00.083 "data_offset": 2048, 00:24:00.083 "data_size": 63488 00:24:00.083 }, 00:24:00.083 { 00:24:00.083 "name": "BaseBdev2", 00:24:00.083 "uuid": "ed4c948b-966e-544c-a558-78d61ed5769e", 00:24:00.083 "is_configured": true, 00:24:00.083 "data_offset": 2048, 00:24:00.083 "data_size": 63488 00:24:00.083 } 00:24:00.083 ] 00:24:00.083 }' 00:24:00.083 13:08:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:00.083 13:08:04 -- common/autotest_common.sh@10 -- # set +x 00:24:00.649 13:08:04 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:01.217 [2024-04-17 13:08:05.056528] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:01.217 [2024-04-17 13:08:05.056833] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:01.217 00:24:01.217 Latency(us) 00:24:01.217 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.217 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:24:01.217 raid_bdev1 : 12.40 99.09 297.26 0.00 0.00 14442.46 344.44 115819.99 00:24:01.217 =================================================================================================================== 00:24:01.217 Total : 99.09 297.26 0.00 0.00 14442.46 344.44 115819.99 00:24:01.217 [2024-04-17 13:08:05.163825] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:01.217 [2024-04-17 13:08:05.164023] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:01.217 0 00:24:01.217 [2024-04-17 13:08:05.164153] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:01.217 [2024-04-17 13:08:05.164170] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:24:01.217 13:08:05 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:01.217 13:08:05 -- bdev/bdev_raid.sh@671 -- # jq length 00:24:01.475 13:08:05 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:24:01.475 13:08:05 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 
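[editor's note] The `[: =: unary operator expected` message logged earlier (13:07:58, from bdev_raid.sh line 617 per the shell's own diagnostic) is a classic single-bracket pitfall: the xtrace shows `'[' = false ']'`, i.e. an unset or empty variable expanded to nothing and left the test with no left operand. A minimal reproduction with the usual fixes — the variable name here is hypothetical, since the real variable is not visible in the trace:

    flag=""                 # empty, as in the failing expansion '[' = false ']'
    # [ $flag = false ]     # expands to `[ = false ]` -> [: =: unary operator expected
    [ "$flag" = false ]     # quoting keeps the empty word in place; test is false, no error
    [[ $flag = false ]]     # double brackets do not word-split, so this is safe too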
00:24:01.475 13:08:05 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:24:01.475 13:08:05 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:01.475 13:08:05 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:24:01.475 13:08:05 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:01.475 13:08:05 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:24:01.475 13:08:05 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:01.475 13:08:05 -- bdev/nbd_common.sh@12 -- # local i 00:24:01.475 13:08:05 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:01.475 13:08:05 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:01.475 13:08:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:24:01.732 /dev/nbd0 00:24:01.732 13:08:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:01.732 13:08:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:01.732 13:08:05 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:24:01.732 13:08:05 -- common/autotest_common.sh@855 -- # local i 00:24:01.732 13:08:05 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:24:01.732 13:08:05 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:24:01.732 13:08:05 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:24:01.732 13:08:05 -- common/autotest_common.sh@859 -- # break 00:24:01.732 13:08:05 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:24:01.732 13:08:05 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:24:01.732 13:08:05 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:01.732 1+0 records in 00:24:01.732 1+0 records out 00:24:01.732 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000509655 s, 8.0 MB/s 00:24:01.732 13:08:05 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:01.732 13:08:05 -- common/autotest_common.sh@872 -- # size=4096 00:24:01.732 13:08:05 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:01.732 13:08:05 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:24:01.732 13:08:05 -- common/autotest_common.sh@875 -- # return 0 00:24:01.732 13:08:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:01.732 13:08:05 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:01.732 13:08:05 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:24:01.732 13:08:05 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:24:01.732 13:08:05 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:24:01.732 13:08:05 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:01.732 13:08:05 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:24:01.732 13:08:05 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:01.732 13:08:05 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:24:01.732 13:08:05 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:01.732 13:08:05 -- bdev/nbd_common.sh@12 -- # local i 00:24:01.732 13:08:05 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:01.732 13:08:05 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:01.732 13:08:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:24:01.989 /dev/nbd1 00:24:01.990 13:08:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:01.990 13:08:06 -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd1 00:24:01.990 13:08:06 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:24:01.990 13:08:06 -- common/autotest_common.sh@855 -- # local i 00:24:01.990 13:08:06 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:24:01.990 13:08:06 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:24:01.990 13:08:06 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:24:01.990 13:08:06 -- common/autotest_common.sh@859 -- # break 00:24:01.990 13:08:06 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:24:01.990 13:08:06 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:24:01.990 13:08:06 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:01.990 1+0 records in 00:24:01.990 1+0 records out 00:24:01.990 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000515506 s, 7.9 MB/s 00:24:01.990 13:08:06 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:01.990 13:08:06 -- common/autotest_common.sh@872 -- # size=4096 00:24:01.990 13:08:06 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:01.990 13:08:06 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:24:01.990 13:08:06 -- common/autotest_common.sh@875 -- # return 0 00:24:01.990 13:08:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:01.990 13:08:06 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:01.990 13:08:06 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:02.247 13:08:06 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:24:02.247 13:08:06 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:02.247 13:08:06 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:24:02.247 13:08:06 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:02.247 13:08:06 -- bdev/nbd_common.sh@51 -- # local i 00:24:02.247 13:08:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:02.247 13:08:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:02.503 13:08:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:02.503 13:08:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:02.503 13:08:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:02.503 13:08:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:02.503 13:08:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:02.503 13:08:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:02.503 13:08:06 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:24:02.503 13:08:06 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:24:02.503 13:08:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:02.503 13:08:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:02.503 13:08:06 -- bdev/nbd_common.sh@41 -- # break 00:24:02.503 13:08:06 -- bdev/nbd_common.sh@45 -- # return 0 00:24:02.503 13:08:06 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:02.503 13:08:06 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:02.503 13:08:06 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:24:02.503 13:08:06 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:02.503 13:08:06 -- bdev/nbd_common.sh@51 -- # local i 00:24:02.503 13:08:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:02.503 13:08:06 -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:03.068 13:08:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:03.068 13:08:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:03.068 13:08:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:03.068 13:08:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:03.068 13:08:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:03.068 13:08:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:03.068 13:08:06 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:24:03.068 13:08:07 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:24:03.068 13:08:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:03.068 13:08:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:03.068 13:08:07 -- bdev/nbd_common.sh@41 -- # break 00:24:03.068 13:08:07 -- bdev/nbd_common.sh@45 -- # return 0 00:24:03.068 13:08:07 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:24:03.068 13:08:07 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:03.068 13:08:07 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:24:03.068 13:08:07 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:24:03.326 13:08:07 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:03.583 [2024-04-17 13:08:07.555864] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:03.583 [2024-04-17 13:08:07.556131] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:03.583 [2024-04-17 13:08:07.556205] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:24:03.583 [2024-04-17 13:08:07.556431] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:03.583 [2024-04-17 13:08:07.558698] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:03.583 [2024-04-17 13:08:07.558869] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:03.584 [2024-04-17 13:08:07.559103] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:03.584 [2024-04-17 13:08:07.559328] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:03.584 BaseBdev1 00:24:03.584 13:08:07 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:03.584 13:08:07 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:24:03.584 13:08:07 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:24:03.841 13:08:07 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:04.099 [2024-04-17 13:08:08.048021] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:04.099 [2024-04-17 13:08:08.048246] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:04.099 [2024-04-17 13:08:08.048378] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:24:04.099 [2024-04-17 13:08:08.048491] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:04.099 [2024-04-17 13:08:08.048996] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:04.099 [2024-04-17 13:08:08.049154] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:04.099 [2024-04-17 13:08:08.049344] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:24:04.099 [2024-04-17 13:08:08.049438] bdev_raid.c:3395:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:24:04.099 [2024-04-17 13:08:08.049536] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:04.099 [2024-04-17 13:08:08.049593] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state configuring 00:24:04.099 [2024-04-17 13:08:08.049781] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:04.099 BaseBdev2 00:24:04.099 13:08:08 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:04.356 13:08:08 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:04.356 [2024-04-17 13:08:08.444167] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:04.356 [2024-04-17 13:08:08.444369] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:04.356 [2024-04-17 13:08:08.444494] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:24:04.356 [2024-04-17 13:08:08.444609] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:04.356 [2024-04-17 13:08:08.445066] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:04.356 [2024-04-17 13:08:08.445210] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:04.356 [2024-04-17 13:08:08.445391] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:24:04.356 [2024-04-17 13:08:08.445512] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:04.356 spare 00:24:04.356 13:08:08 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:04.356 13:08:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:04.356 13:08:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:04.356 13:08:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:04.356 13:08:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:04.356 13:08:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:04.356 13:08:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:04.356 13:08:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:04.356 13:08:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:04.356 13:08:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:04.356 13:08:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:04.356 13:08:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:04.613 [2024-04-17 13:08:08.545682] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80 00:24:04.613 [2024-04-17 13:08:08.545828] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:04.614 
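[editor's note] The `verify_raid_bdev_state raid_bdev1 online raid1 0 2` call traced here declares its locals at bdev_raid.sh@117-125 and fetches state at @127, but the actual comparisons run after `xtrace_disable`, so they never appear in this log. A rough sketch of what the helper evidently checks, reconstructed from the argument names and the JSON fields dumped throughout — an assumption, not the script's verbatim body:

    verify_raid_bdev_state() {
        local raid_bdev_name=$1 expected_state=$2 raid_level=$3
        local strip_size=$4 num_base_bdevs_operational=$5 raid_bdev_info
        # same RPC + jq filter that appears at @127 in the trace above
        raid_bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$raid_bdev_name\")")
        [[ $(jq -r .state <<<"$raid_bdev_info") == "$expected_state" ]] &&
            [[ $(jq -r .raid_level <<<"$raid_bdev_info") == "$raid_level" ]] &&
            [[ $(jq -r .num_base_bdevs_operational <<<"$raid_bdev_info") -eq $num_base_bdevs_operational ]]
    }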
[2024-04-17 13:08:08.545992] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002d080 00:24:04.614 [2024-04-17 13:08:08.546473] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80 00:24:04.614 [2024-04-17 13:08:08.546638] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80 00:24:04.614 [2024-04-17 13:08:08.546855] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:04.614 13:08:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:04.614 "name": "raid_bdev1", 00:24:04.614 "uuid": "e502c561-a9f2-497a-b93c-a1c978f1128c", 00:24:04.614 "strip_size_kb": 0, 00:24:04.614 "state": "online", 00:24:04.614 "raid_level": "raid1", 00:24:04.614 "superblock": true, 00:24:04.614 "num_base_bdevs": 2, 00:24:04.614 "num_base_bdevs_discovered": 2, 00:24:04.614 "num_base_bdevs_operational": 2, 00:24:04.614 "base_bdevs_list": [ 00:24:04.614 { 00:24:04.614 "name": "spare", 00:24:04.614 "uuid": "5bffeb81-0173-5f9d-bff7-701eba1edf9f", 00:24:04.614 "is_configured": true, 00:24:04.614 "data_offset": 2048, 00:24:04.614 "data_size": 63488 00:24:04.614 }, 00:24:04.614 { 00:24:04.614 "name": "BaseBdev2", 00:24:04.614 "uuid": "ed4c948b-966e-544c-a558-78d61ed5769e", 00:24:04.614 "is_configured": true, 00:24:04.614 "data_offset": 2048, 00:24:04.614 "data_size": 63488 00:24:04.614 } 00:24:04.614 ] 00:24:04.614 }' 00:24:04.614 13:08:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:04.614 13:08:08 -- common/autotest_common.sh@10 -- # set +x 00:24:05.548 13:08:09 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:05.548 13:08:09 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:05.548 13:08:09 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:05.548 13:08:09 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:05.548 13:08:09 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:05.548 13:08:09 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:05.548 13:08:09 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.805 13:08:09 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:05.805 "name": "raid_bdev1", 00:24:05.805 "uuid": "e502c561-a9f2-497a-b93c-a1c978f1128c", 00:24:05.805 "strip_size_kb": 0, 00:24:05.805 "state": "online", 00:24:05.805 "raid_level": "raid1", 00:24:05.805 "superblock": true, 00:24:05.805 "num_base_bdevs": 2, 00:24:05.805 "num_base_bdevs_discovered": 2, 00:24:05.805 "num_base_bdevs_operational": 2, 00:24:05.805 "base_bdevs_list": [ 00:24:05.805 { 00:24:05.805 "name": "spare", 00:24:05.805 "uuid": "5bffeb81-0173-5f9d-bff7-701eba1edf9f", 00:24:05.805 "is_configured": true, 00:24:05.805 "data_offset": 2048, 00:24:05.805 "data_size": 63488 00:24:05.805 }, 00:24:05.805 { 00:24:05.805 "name": "BaseBdev2", 00:24:05.805 "uuid": "ed4c948b-966e-544c-a558-78d61ed5769e", 00:24:05.805 "is_configured": true, 00:24:05.805 "data_offset": 2048, 00:24:05.805 "data_size": 63488 00:24:05.805 } 00:24:05.805 ] 00:24:05.805 }' 00:24:05.805 13:08:09 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:05.805 13:08:09 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:05.805 13:08:09 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:05.805 13:08:09 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:05.805 13:08:09 -- bdev/bdev_raid.sh@706 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:05.805 13:08:09 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:06.090 13:08:10 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:24:06.090 13:08:10 -- bdev/bdev_raid.sh@709 -- # killprocess 132355 00:24:06.090 13:08:10 -- common/autotest_common.sh@924 -- # '[' -z 132355 ']' 00:24:06.090 13:08:10 -- common/autotest_common.sh@928 -- # kill -0 132355 00:24:06.090 13:08:10 -- common/autotest_common.sh@929 -- # uname 00:24:06.090 13:08:10 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:24:06.090 13:08:10 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 132355 00:24:06.090 killing process with pid 132355 00:24:06.090 Received shutdown signal, test time was about 17.434813 seconds 00:24:06.090 00:24:06.090 Latency(us) 00:24:06.091 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:06.091 =================================================================================================================== 00:24:06.091 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:06.091 13:08:10 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:24:06.091 13:08:10 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:24:06.091 13:08:10 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 132355' 00:24:06.091 13:08:10 -- common/autotest_common.sh@943 -- # kill 132355 00:24:06.091 13:08:10 -- common/autotest_common.sh@948 -- # wait 132355 00:24:06.091 [2024-04-17 13:08:10.176031] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:06.091 [2024-04-17 13:08:10.176137] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:06.091 [2024-04-17 13:08:10.176235] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:06.091 [2024-04-17 13:08:10.176401] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:24:06.350 [2024-04-17 13:08:10.365044] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:07.728 ************************************ 00:24:07.728 END TEST raid_rebuild_test_sb_io 00:24:07.728 ************************************ 00:24:07.728 13:08:11 -- bdev/bdev_raid.sh@711 -- # return 0 00:24:07.728 00:24:07.728 real 0m23.462s 00:24:07.728 user 0m37.675s 00:24:07.728 sys 0m2.406s 00:24:07.728 13:08:11 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:24:07.728 13:08:11 -- common/autotest_common.sh@10 -- # set +x 00:24:07.728 13:08:11 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:24:07.728 13:08:11 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false 00:24:07.728 13:08:11 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:24:07.728 13:08:11 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:24:07.728 13:08:11 -- common/autotest_common.sh@10 -- # set +x 00:24:07.728 ************************************ 00:24:07.728 START TEST raid_rebuild_test 00:24:07.728 ************************************ 00:24:07.728 13:08:11 -- common/autotest_common.sh@1099 -- # raid_rebuild_test raid1 4 false false 00:24:07.728 13:08:11 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:24:07.728 13:08:11 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:24:07.728 13:08:11 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:24:07.728 13:08:11 -- bdev/bdev_raid.sh@520 
-- # local background_io=false 00:24:07.728 13:08:11 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:24:07.728 13:08:11 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:24:07.728 13:08:11 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:07.729 13:08:11 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:24:07.729 13:08:11 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:07.729 13:08:11 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:07.729 13:08:11 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:24:07.729 13:08:11 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:07.729 13:08:11 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:07.729 13:08:11 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:24:07.729 13:08:11 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:07.729 13:08:11 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:07.729 13:08:11 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:24:07.729 13:08:11 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:07.729 13:08:11 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:07.729 13:08:11 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:24:07.729 13:08:11 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:24:07.729 13:08:11 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:24:07.729 13:08:11 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:24:07.729 13:08:11 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:24:07.729 13:08:11 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:24:07.729 13:08:11 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:24:07.729 13:08:11 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:24:07.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:07.729 13:08:11 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:24:07.729 13:08:11 -- bdev/bdev_raid.sh@544 -- # raid_pid=132992 00:24:07.729 13:08:11 -- bdev/bdev_raid.sh@545 -- # waitforlisten 132992 /var/tmp/spdk-raid.sock 00:24:07.729 13:08:11 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:07.729 13:08:11 -- common/autotest_common.sh@817 -- # '[' -z 132992 ']' 00:24:07.729 13:08:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:07.729 13:08:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:07.729 13:08:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:07.729 13:08:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:07.729 13:08:11 -- common/autotest_common.sh@10 -- # set +x 00:24:07.729 [2024-04-17 13:08:11.680919] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:24:07.729 [2024-04-17 13:08:11.681358] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132992 ] 00:24:07.729 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:07.729 Zero copy mechanism will not be used. 
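[editor's note] The bdevperf invocation echoed above is dense; restated with notes. The glosses are inferred from standard SPDK bdevperf options and the banner lines in this log, not from the script itself; -U is copied verbatim without a gloss:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 \
        -o 3M -q 2 -U -z -L bdev_raid
    # -r: RPC socket that the rpc.py helpers in this test talk to
    # -T: appears to restrict I/O to raid_bdev1, the bdev under test
    # -t 60, -w randrw, -M 50: 60 s of random mixed I/O, 50% reads
    # -o 3M, -q 2: 3 MiB I/Os at queue depth 2; 3145728 B is what trips the
    #   "greater than zero copy threshold (65536)" notice above
    # -z: start idle and wait for an RPC before running the workload,
    #   so the script can build the raid bdev first
    # -L bdev_raid: enable the *DEBUG* bdev_raid logging seen throughout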
00:24:07.729 [2024-04-17 13:08:11.845900] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.988 [2024-04-17 13:08:12.039439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:08.310 [2024-04-17 13:08:12.230815] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:08.588 13:08:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:08.588 13:08:12 -- common/autotest_common.sh@850 -- # return 0 00:24:08.588 13:08:12 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:08.588 13:08:12 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:24:08.588 13:08:12 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:08.848 BaseBdev1 00:24:08.848 13:08:12 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:08.848 13:08:12 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:24:08.848 13:08:12 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:09.416 BaseBdev2 00:24:09.416 13:08:13 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:09.416 13:08:13 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:24:09.416 13:08:13 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:09.416 BaseBdev3 00:24:09.675 13:08:13 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:09.675 13:08:13 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:24:09.675 13:08:13 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:09.934 BaseBdev4 00:24:09.934 13:08:13 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:10.194 spare_malloc 00:24:10.194 13:08:14 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:10.453 spare_delay 00:24:10.453 13:08:14 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:10.712 [2024-04-17 13:08:14.690958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:10.712 [2024-04-17 13:08:14.691403] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:10.712 [2024-04-17 13:08:14.691574] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:24:10.712 [2024-04-17 13:08:14.691738] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:10.712 [2024-04-17 13:08:14.694247] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:10.712 [2024-04-17 13:08:14.694428] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:10.712 spare 00:24:10.712 13:08:14 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:24:10.971 [2024-04-17 13:08:14.967261] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:10.971 [2024-04-17 13:08:14.969542] 
bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:10.971 [2024-04-17 13:08:14.969723] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:10.971 [2024-04-17 13:08:14.969805] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:10.971 [2024-04-17 13:08:14.969991] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:24:10.971 [2024-04-17 13:08:14.970095] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:24:10.971 [2024-04-17 13:08:14.970393] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:24:10.971 [2024-04-17 13:08:14.970872] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:24:10.971 [2024-04-17 13:08:14.970994] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:24:10.971 [2024-04-17 13:08:14.971325] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:10.971 13:08:14 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:24:10.971 13:08:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:10.971 13:08:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:10.971 13:08:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:10.971 13:08:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:10.971 13:08:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:10.971 13:08:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:10.971 13:08:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:10.971 13:08:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:10.971 13:08:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:10.971 13:08:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:10.971 13:08:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:11.230 13:08:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:11.230 "name": "raid_bdev1", 00:24:11.230 "uuid": "912dc8ce-2734-4a41-9855-83d94f32d631", 00:24:11.230 "strip_size_kb": 0, 00:24:11.230 "state": "online", 00:24:11.230 "raid_level": "raid1", 00:24:11.230 "superblock": false, 00:24:11.230 "num_base_bdevs": 4, 00:24:11.230 "num_base_bdevs_discovered": 4, 00:24:11.230 "num_base_bdevs_operational": 4, 00:24:11.230 "base_bdevs_list": [ 00:24:11.230 { 00:24:11.230 "name": "BaseBdev1", 00:24:11.230 "uuid": "8bc6c4bb-7130-49b9-9513-1dac386330b6", 00:24:11.230 "is_configured": true, 00:24:11.230 "data_offset": 0, 00:24:11.230 "data_size": 65536 00:24:11.230 }, 00:24:11.230 { 00:24:11.230 "name": "BaseBdev2", 00:24:11.230 "uuid": "eb31de82-f82f-4e18-8c84-9bbcd277a892", 00:24:11.230 "is_configured": true, 00:24:11.230 "data_offset": 0, 00:24:11.230 "data_size": 65536 00:24:11.230 }, 00:24:11.230 { 00:24:11.230 "name": "BaseBdev3", 00:24:11.230 "uuid": "dc99108c-17a3-420b-b097-152cec9a138d", 00:24:11.230 "is_configured": true, 00:24:11.230 "data_offset": 0, 00:24:11.230 "data_size": 65536 00:24:11.230 }, 00:24:11.230 { 00:24:11.230 "name": "BaseBdev4", 00:24:11.230 "uuid": "613a0be3-980c-46ea-96df-131ac2a6eafb", 00:24:11.230 "is_configured": true, 00:24:11.230 "data_offset": 0, 00:24:11.230 "data_size": 65536 00:24:11.230 } 00:24:11.230 ] 00:24:11.230 }' 00:24:11.230 
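[editor's note] For context, the setup that produced the raid_bdev_info dump above, condensed from the preceding trace. Commands are taken from the xtrace; the malloc geometry matches the "blockcnt 65536, blocklen 512" debug line (32 MiB / 512 B = 65536 blocks):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    for i in 1 2 3 4; do
        rpc bdev_malloc_create 32 512 -b BaseBdev$i   # 32 MiB of 512 B blocks
    done
    rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1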
13:08:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:11.230 13:08:15 -- common/autotest_common.sh@10 -- # set +x 00:24:11.798 13:08:15 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:11.798 13:08:15 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:24:12.057 [2024-04-17 13:08:16.143960] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:12.057 13:08:16 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:24:12.057 13:08:16 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:12.057 13:08:16 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:12.316 13:08:16 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:24:12.316 13:08:16 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:24:12.316 13:08:16 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:24:12.316 13:08:16 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:24:12.316 13:08:16 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:12.316 13:08:16 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:24:12.316 13:08:16 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:12.316 13:08:16 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:24:12.316 13:08:16 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:12.316 13:08:16 -- bdev/nbd_common.sh@12 -- # local i 00:24:12.316 13:08:16 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:12.316 13:08:16 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:12.316 13:08:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:12.576 [2024-04-17 13:08:16.695800] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:24:12.576 /dev/nbd0 00:24:12.835 13:08:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:12.835 13:08:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:12.835 13:08:16 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:24:12.835 13:08:16 -- common/autotest_common.sh@855 -- # local i 00:24:12.835 13:08:16 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:24:12.835 13:08:16 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:24:12.835 13:08:16 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:24:12.835 13:08:16 -- common/autotest_common.sh@859 -- # break 00:24:12.835 13:08:16 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:24:12.835 13:08:16 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:24:12.835 13:08:16 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:12.835 1+0 records in 00:24:12.835 1+0 records out 00:24:12.835 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000495637 s, 8.3 MB/s 00:24:12.835 13:08:16 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:12.835 13:08:16 -- common/autotest_common.sh@872 -- # size=4096 00:24:12.835 13:08:16 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:12.835 13:08:16 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:24:12.835 13:08:16 -- common/autotest_common.sh@875 -- # return 0 00:24:12.835 13:08:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:12.835 13:08:16 -- bdev/nbd_common.sh@14 -- # (( 
i < 1 )) 00:24:12.835 13:08:16 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:24:12.835 13:08:16 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:24:12.835 13:08:16 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:24:19.401 65536+0 records in 00:24:19.401 65536+0 records out 00:24:19.401 33554432 bytes (34 MB, 32 MiB) copied, 6.0734 s, 5.5 MB/s 00:24:19.401 13:08:22 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:19.401 13:08:22 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:19.401 13:08:22 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:24:19.401 13:08:22 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:19.401 13:08:22 -- bdev/nbd_common.sh@51 -- # local i 00:24:19.401 13:08:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:19.401 13:08:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:19.401 13:08:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:19.401 13:08:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:19.401 13:08:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:19.401 13:08:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:19.401 13:08:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:19.401 13:08:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:19.401 [2024-04-17 13:08:23.069521] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:19.401 13:08:23 -- bdev/nbd_common.sh@41 -- # break 00:24:19.401 13:08:23 -- bdev/nbd_common.sh@45 -- # return 0 00:24:19.401 13:08:23 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:19.401 [2024-04-17 13:08:23.317202] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:19.401 13:08:23 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:19.401 13:08:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:19.401 13:08:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:19.401 13:08:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:19.401 13:08:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:19.401 13:08:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:19.401 13:08:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:19.401 13:08:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:19.401 13:08:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:19.401 13:08:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:19.401 13:08:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:19.401 13:08:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:19.671 13:08:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:19.671 "name": "raid_bdev1", 00:24:19.671 "uuid": "912dc8ce-2734-4a41-9855-83d94f32d631", 00:24:19.671 "strip_size_kb": 0, 00:24:19.671 "state": "online", 00:24:19.671 "raid_level": "raid1", 00:24:19.671 "superblock": false, 00:24:19.671 "num_base_bdevs": 4, 00:24:19.671 "num_base_bdevs_discovered": 3, 00:24:19.671 "num_base_bdevs_operational": 3, 00:24:19.671 "base_bdevs_list": [ 00:24:19.671 { 00:24:19.671 "name": null, 00:24:19.671 "uuid": "00000000-0000-0000-0000-000000000000", 
00:24:19.671 "is_configured": false, 00:24:19.671 "data_offset": 0, 00:24:19.671 "data_size": 65536 00:24:19.671 }, 00:24:19.671 { 00:24:19.671 "name": "BaseBdev2", 00:24:19.671 "uuid": "eb31de82-f82f-4e18-8c84-9bbcd277a892", 00:24:19.671 "is_configured": true, 00:24:19.671 "data_offset": 0, 00:24:19.671 "data_size": 65536 00:24:19.671 }, 00:24:19.671 { 00:24:19.671 "name": "BaseBdev3", 00:24:19.671 "uuid": "dc99108c-17a3-420b-b097-152cec9a138d", 00:24:19.671 "is_configured": true, 00:24:19.671 "data_offset": 0, 00:24:19.671 "data_size": 65536 00:24:19.671 }, 00:24:19.671 { 00:24:19.671 "name": "BaseBdev4", 00:24:19.671 "uuid": "613a0be3-980c-46ea-96df-131ac2a6eafb", 00:24:19.671 "is_configured": true, 00:24:19.671 "data_offset": 0, 00:24:19.671 "data_size": 65536 00:24:19.671 } 00:24:19.671 ] 00:24:19.671 }' 00:24:19.671 13:08:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:19.671 13:08:23 -- common/autotest_common.sh@10 -- # set +x 00:24:20.251 13:08:24 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:20.509 [2024-04-17 13:08:24.525657] bdev_raid.c:3247:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:20.509 [2024-04-17 13:08:24.526015] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:20.509 [2024-04-17 13:08:24.539358] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0b6a0 00:24:20.509 [2024-04-17 13:08:24.541762] bdev_raid.c:2751:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:20.509 13:08:24 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:24:21.446 13:08:25 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:21.446 13:08:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:21.446 13:08:25 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:21.446 13:08:25 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:21.446 13:08:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:21.446 13:08:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:21.446 13:08:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:21.705 13:08:25 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:21.705 "name": "raid_bdev1", 00:24:21.705 "uuid": "912dc8ce-2734-4a41-9855-83d94f32d631", 00:24:21.705 "strip_size_kb": 0, 00:24:21.705 "state": "online", 00:24:21.705 "raid_level": "raid1", 00:24:21.705 "superblock": false, 00:24:21.705 "num_base_bdevs": 4, 00:24:21.705 "num_base_bdevs_discovered": 4, 00:24:21.705 "num_base_bdevs_operational": 4, 00:24:21.705 "process": { 00:24:21.705 "type": "rebuild", 00:24:21.705 "target": "spare", 00:24:21.705 "progress": { 00:24:21.705 "blocks": 24576, 00:24:21.705 "percent": 37 00:24:21.705 } 00:24:21.705 }, 00:24:21.705 "base_bdevs_list": [ 00:24:21.705 { 00:24:21.705 "name": "spare", 00:24:21.705 "uuid": "3b0a6fa0-3f7d-567e-8774-9340d23e7efc", 00:24:21.705 "is_configured": true, 00:24:21.705 "data_offset": 0, 00:24:21.705 "data_size": 65536 00:24:21.705 }, 00:24:21.705 { 00:24:21.705 "name": "BaseBdev2", 00:24:21.705 "uuid": "eb31de82-f82f-4e18-8c84-9bbcd277a892", 00:24:21.705 "is_configured": true, 00:24:21.705 "data_offset": 0, 00:24:21.705 "data_size": 65536 00:24:21.705 }, 00:24:21.705 { 00:24:21.706 "name": "BaseBdev3", 00:24:21.706 "uuid": 
"dc99108c-17a3-420b-b097-152cec9a138d", 00:24:21.706 "is_configured": true, 00:24:21.706 "data_offset": 0, 00:24:21.706 "data_size": 65536 00:24:21.706 }, 00:24:21.706 { 00:24:21.706 "name": "BaseBdev4", 00:24:21.706 "uuid": "613a0be3-980c-46ea-96df-131ac2a6eafb", 00:24:21.706 "is_configured": true, 00:24:21.706 "data_offset": 0, 00:24:21.706 "data_size": 65536 00:24:21.706 } 00:24:21.706 ] 00:24:21.706 }' 00:24:21.706 13:08:25 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:21.965 13:08:25 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:21.965 13:08:25 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:21.965 13:08:25 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:21.965 13:08:25 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:22.224 [2024-04-17 13:08:26.212300] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:22.224 [2024-04-17 13:08:26.253598] bdev_raid.c:2442:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:22.224 [2024-04-17 13:08:26.253928] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:22.224 13:08:26 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:22.224 13:08:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:22.224 13:08:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:22.224 13:08:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:22.224 13:08:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:22.224 13:08:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:22.224 13:08:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:22.224 13:08:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:22.224 13:08:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:22.224 13:08:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:22.224 13:08:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:22.224 13:08:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:22.483 13:08:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:22.483 "name": "raid_bdev1", 00:24:22.483 "uuid": "912dc8ce-2734-4a41-9855-83d94f32d631", 00:24:22.483 "strip_size_kb": 0, 00:24:22.483 "state": "online", 00:24:22.483 "raid_level": "raid1", 00:24:22.483 "superblock": false, 00:24:22.483 "num_base_bdevs": 4, 00:24:22.483 "num_base_bdevs_discovered": 3, 00:24:22.483 "num_base_bdevs_operational": 3, 00:24:22.483 "base_bdevs_list": [ 00:24:22.483 { 00:24:22.483 "name": null, 00:24:22.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:22.483 "is_configured": false, 00:24:22.483 "data_offset": 0, 00:24:22.483 "data_size": 65536 00:24:22.483 }, 00:24:22.483 { 00:24:22.483 "name": "BaseBdev2", 00:24:22.483 "uuid": "eb31de82-f82f-4e18-8c84-9bbcd277a892", 00:24:22.483 "is_configured": true, 00:24:22.483 "data_offset": 0, 00:24:22.483 "data_size": 65536 00:24:22.483 }, 00:24:22.483 { 00:24:22.483 "name": "BaseBdev3", 00:24:22.483 "uuid": "dc99108c-17a3-420b-b097-152cec9a138d", 00:24:22.483 "is_configured": true, 00:24:22.483 "data_offset": 0, 00:24:22.483 "data_size": 65536 00:24:22.483 }, 00:24:22.483 { 00:24:22.483 "name": "BaseBdev4", 00:24:22.483 "uuid": "613a0be3-980c-46ea-96df-131ac2a6eafb", 
00:24:22.483 "is_configured": true, 00:24:22.483 "data_offset": 0, 00:24:22.483 "data_size": 65536 00:24:22.483 } 00:24:22.483 ] 00:24:22.483 }' 00:24:22.483 13:08:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:22.483 13:08:26 -- common/autotest_common.sh@10 -- # set +x 00:24:23.050 13:08:27 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:23.050 13:08:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:23.050 13:08:27 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:23.050 13:08:27 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:23.050 13:08:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:23.050 13:08:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:23.050 13:08:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:23.309 13:08:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:23.309 "name": "raid_bdev1", 00:24:23.309 "uuid": "912dc8ce-2734-4a41-9855-83d94f32d631", 00:24:23.309 "strip_size_kb": 0, 00:24:23.309 "state": "online", 00:24:23.309 "raid_level": "raid1", 00:24:23.309 "superblock": false, 00:24:23.309 "num_base_bdevs": 4, 00:24:23.309 "num_base_bdevs_discovered": 3, 00:24:23.309 "num_base_bdevs_operational": 3, 00:24:23.309 "base_bdevs_list": [ 00:24:23.309 { 00:24:23.309 "name": null, 00:24:23.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:23.309 "is_configured": false, 00:24:23.309 "data_offset": 0, 00:24:23.309 "data_size": 65536 00:24:23.310 }, 00:24:23.310 { 00:24:23.310 "name": "BaseBdev2", 00:24:23.310 "uuid": "eb31de82-f82f-4e18-8c84-9bbcd277a892", 00:24:23.310 "is_configured": true, 00:24:23.310 "data_offset": 0, 00:24:23.310 "data_size": 65536 00:24:23.310 }, 00:24:23.310 { 00:24:23.310 "name": "BaseBdev3", 00:24:23.310 "uuid": "dc99108c-17a3-420b-b097-152cec9a138d", 00:24:23.310 "is_configured": true, 00:24:23.310 "data_offset": 0, 00:24:23.310 "data_size": 65536 00:24:23.310 }, 00:24:23.310 { 00:24:23.310 "name": "BaseBdev4", 00:24:23.310 "uuid": "613a0be3-980c-46ea-96df-131ac2a6eafb", 00:24:23.310 "is_configured": true, 00:24:23.310 "data_offset": 0, 00:24:23.310 "data_size": 65536 00:24:23.310 } 00:24:23.310 ] 00:24:23.310 }' 00:24:23.310 13:08:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:23.569 13:08:27 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:23.569 13:08:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:23.569 13:08:27 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:23.569 13:08:27 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:23.827 [2024-04-17 13:08:27.788194] bdev_raid.c:3247:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:23.827 [2024-04-17 13:08:27.788392] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:23.827 [2024-04-17 13:08:27.800726] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0b840 00:24:23.827 [2024-04-17 13:08:27.802955] bdev_raid.c:2751:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:23.827 13:08:27 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:24:24.762 13:08:28 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:24.762 13:08:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 
00:24:24.762 13:08:28 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:24.762 13:08:28 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:24.762 13:08:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:24.762 13:08:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:24.762 13:08:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:25.021 13:08:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:25.021 "name": "raid_bdev1", 00:24:25.021 "uuid": "912dc8ce-2734-4a41-9855-83d94f32d631", 00:24:25.021 "strip_size_kb": 0, 00:24:25.021 "state": "online", 00:24:25.021 "raid_level": "raid1", 00:24:25.021 "superblock": false, 00:24:25.021 "num_base_bdevs": 4, 00:24:25.021 "num_base_bdevs_discovered": 4, 00:24:25.021 "num_base_bdevs_operational": 4, 00:24:25.021 "process": { 00:24:25.021 "type": "rebuild", 00:24:25.021 "target": "spare", 00:24:25.021 "progress": { 00:24:25.021 "blocks": 24576, 00:24:25.021 "percent": 37 00:24:25.021 } 00:24:25.021 }, 00:24:25.021 "base_bdevs_list": [ 00:24:25.021 { 00:24:25.021 "name": "spare", 00:24:25.021 "uuid": "3b0a6fa0-3f7d-567e-8774-9340d23e7efc", 00:24:25.021 "is_configured": true, 00:24:25.021 "data_offset": 0, 00:24:25.021 "data_size": 65536 00:24:25.021 }, 00:24:25.021 { 00:24:25.021 "name": "BaseBdev2", 00:24:25.021 "uuid": "eb31de82-f82f-4e18-8c84-9bbcd277a892", 00:24:25.021 "is_configured": true, 00:24:25.021 "data_offset": 0, 00:24:25.021 "data_size": 65536 00:24:25.021 }, 00:24:25.021 { 00:24:25.021 "name": "BaseBdev3", 00:24:25.021 "uuid": "dc99108c-17a3-420b-b097-152cec9a138d", 00:24:25.021 "is_configured": true, 00:24:25.021 "data_offset": 0, 00:24:25.021 "data_size": 65536 00:24:25.021 }, 00:24:25.021 { 00:24:25.021 "name": "BaseBdev4", 00:24:25.021 "uuid": "613a0be3-980c-46ea-96df-131ac2a6eafb", 00:24:25.021 "is_configured": true, 00:24:25.021 "data_offset": 0, 00:24:25.021 "data_size": 65536 00:24:25.021 } 00:24:25.021 ] 00:24:25.021 }' 00:24:25.021 13:08:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:25.021 13:08:29 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:25.021 13:08:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:25.279 13:08:29 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:25.279 13:08:29 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:24:25.279 13:08:29 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:24:25.279 13:08:29 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:24:25.279 13:08:29 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:24:25.279 13:08:29 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:24:25.537 [2024-04-17 13:08:29.425156] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:25.537 [2024-04-17 13:08:29.513141] bdev_raid.c:1969:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d0b840 00:24:25.537 13:08:29 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:24:25.537 13:08:29 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:24:25.537 13:08:29 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:25.537 13:08:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:25.537 13:08:29 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:25.537 13:08:29 -- 
bdev/bdev_raid.sh@185 -- # local target=spare 00:24:25.537 13:08:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:25.537 13:08:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:25.537 13:08:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:25.803 13:08:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:25.803 "name": "raid_bdev1", 00:24:25.803 "uuid": "912dc8ce-2734-4a41-9855-83d94f32d631", 00:24:25.803 "strip_size_kb": 0, 00:24:25.803 "state": "online", 00:24:25.803 "raid_level": "raid1", 00:24:25.803 "superblock": false, 00:24:25.803 "num_base_bdevs": 4, 00:24:25.803 "num_base_bdevs_discovered": 3, 00:24:25.803 "num_base_bdevs_operational": 3, 00:24:25.803 "process": { 00:24:25.803 "type": "rebuild", 00:24:25.803 "target": "spare", 00:24:25.803 "progress": { 00:24:25.803 "blocks": 38912, 00:24:25.803 "percent": 59 00:24:25.803 } 00:24:25.803 }, 00:24:25.803 "base_bdevs_list": [ 00:24:25.803 { 00:24:25.803 "name": "spare", 00:24:25.803 "uuid": "3b0a6fa0-3f7d-567e-8774-9340d23e7efc", 00:24:25.803 "is_configured": true, 00:24:25.803 "data_offset": 0, 00:24:25.803 "data_size": 65536 00:24:25.803 }, 00:24:25.803 { 00:24:25.803 "name": null, 00:24:25.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:25.804 "is_configured": false, 00:24:25.804 "data_offset": 0, 00:24:25.804 "data_size": 65536 00:24:25.804 }, 00:24:25.804 { 00:24:25.804 "name": "BaseBdev3", 00:24:25.804 "uuid": "dc99108c-17a3-420b-b097-152cec9a138d", 00:24:25.804 "is_configured": true, 00:24:25.804 "data_offset": 0, 00:24:25.804 "data_size": 65536 00:24:25.804 }, 00:24:25.804 { 00:24:25.804 "name": "BaseBdev4", 00:24:25.804 "uuid": "613a0be3-980c-46ea-96df-131ac2a6eafb", 00:24:25.804 "is_configured": true, 00:24:25.804 "data_offset": 0, 00:24:25.804 "data_size": 65536 00:24:25.804 } 00:24:25.804 ] 00:24:25.804 }' 00:24:25.804 13:08:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:25.804 13:08:29 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:25.804 13:08:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:25.804 13:08:29 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:25.804 13:08:29 -- bdev/bdev_raid.sh@657 -- # local timeout=530 00:24:25.804 13:08:29 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:25.804 13:08:29 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:25.804 13:08:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:25.804 13:08:29 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:25.804 13:08:29 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:25.804 13:08:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:25.804 13:08:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:25.804 13:08:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:26.074 13:08:30 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:26.074 "name": "raid_bdev1", 00:24:26.074 "uuid": "912dc8ce-2734-4a41-9855-83d94f32d631", 00:24:26.074 "strip_size_kb": 0, 00:24:26.074 "state": "online", 00:24:26.074 "raid_level": "raid1", 00:24:26.074 "superblock": false, 00:24:26.074 "num_base_bdevs": 4, 00:24:26.074 "num_base_bdevs_discovered": 3, 00:24:26.074 "num_base_bdevs_operational": 3, 00:24:26.074 "process": { 00:24:26.074 "type": 
"rebuild", 00:24:26.074 "target": "spare", 00:24:26.074 "progress": { 00:24:26.074 "blocks": 45056, 00:24:26.074 "percent": 68 00:24:26.074 } 00:24:26.074 }, 00:24:26.074 "base_bdevs_list": [ 00:24:26.074 { 00:24:26.074 "name": "spare", 00:24:26.074 "uuid": "3b0a6fa0-3f7d-567e-8774-9340d23e7efc", 00:24:26.074 "is_configured": true, 00:24:26.074 "data_offset": 0, 00:24:26.074 "data_size": 65536 00:24:26.074 }, 00:24:26.074 { 00:24:26.074 "name": null, 00:24:26.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:26.074 "is_configured": false, 00:24:26.074 "data_offset": 0, 00:24:26.074 "data_size": 65536 00:24:26.074 }, 00:24:26.074 { 00:24:26.074 "name": "BaseBdev3", 00:24:26.074 "uuid": "dc99108c-17a3-420b-b097-152cec9a138d", 00:24:26.074 "is_configured": true, 00:24:26.074 "data_offset": 0, 00:24:26.074 "data_size": 65536 00:24:26.074 }, 00:24:26.074 { 00:24:26.074 "name": "BaseBdev4", 00:24:26.074 "uuid": "613a0be3-980c-46ea-96df-131ac2a6eafb", 00:24:26.074 "is_configured": true, 00:24:26.074 "data_offset": 0, 00:24:26.074 "data_size": 65536 00:24:26.074 } 00:24:26.074 ] 00:24:26.074 }' 00:24:26.074 13:08:30 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:26.074 13:08:30 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:26.074 13:08:30 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:26.333 13:08:30 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:26.333 13:08:30 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:26.900 [2024-04-17 13:08:31.022325] bdev_raid.c:2716:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:26.900 [2024-04-17 13:08:31.022653] bdev_raid.c:2433:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:26.900 [2024-04-17 13:08:31.022852] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:27.159 13:08:31 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:27.159 13:08:31 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:27.159 13:08:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:27.159 13:08:31 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:27.159 13:08:31 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:27.159 13:08:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:27.159 13:08:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:27.159 13:08:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:27.418 13:08:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:27.418 "name": "raid_bdev1", 00:24:27.418 "uuid": "912dc8ce-2734-4a41-9855-83d94f32d631", 00:24:27.418 "strip_size_kb": 0, 00:24:27.418 "state": "online", 00:24:27.418 "raid_level": "raid1", 00:24:27.418 "superblock": false, 00:24:27.418 "num_base_bdevs": 4, 00:24:27.418 "num_base_bdevs_discovered": 3, 00:24:27.418 "num_base_bdevs_operational": 3, 00:24:27.418 "base_bdevs_list": [ 00:24:27.418 { 00:24:27.418 "name": "spare", 00:24:27.418 "uuid": "3b0a6fa0-3f7d-567e-8774-9340d23e7efc", 00:24:27.418 "is_configured": true, 00:24:27.418 "data_offset": 0, 00:24:27.418 "data_size": 65536 00:24:27.418 }, 00:24:27.418 { 00:24:27.418 "name": null, 00:24:27.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:27.418 "is_configured": false, 00:24:27.418 "data_offset": 0, 00:24:27.418 "data_size": 65536 00:24:27.418 }, 00:24:27.418 { 00:24:27.418 "name": 
"BaseBdev3", 00:24:27.418 "uuid": "dc99108c-17a3-420b-b097-152cec9a138d", 00:24:27.418 "is_configured": true, 00:24:27.418 "data_offset": 0, 00:24:27.418 "data_size": 65536 00:24:27.418 }, 00:24:27.418 { 00:24:27.418 "name": "BaseBdev4", 00:24:27.418 "uuid": "613a0be3-980c-46ea-96df-131ac2a6eafb", 00:24:27.418 "is_configured": true, 00:24:27.418 "data_offset": 0, 00:24:27.418 "data_size": 65536 00:24:27.418 } 00:24:27.418 ] 00:24:27.418 }' 00:24:27.418 13:08:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:27.676 13:08:31 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:27.676 13:08:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:27.676 13:08:31 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:24:27.676 13:08:31 -- bdev/bdev_raid.sh@660 -- # break 00:24:27.676 13:08:31 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:27.676 13:08:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:27.676 13:08:31 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:27.676 13:08:31 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:27.676 13:08:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:27.676 13:08:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:27.676 13:08:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:27.935 13:08:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:27.935 "name": "raid_bdev1", 00:24:27.935 "uuid": "912dc8ce-2734-4a41-9855-83d94f32d631", 00:24:27.935 "strip_size_kb": 0, 00:24:27.935 "state": "online", 00:24:27.935 "raid_level": "raid1", 00:24:27.935 "superblock": false, 00:24:27.935 "num_base_bdevs": 4, 00:24:27.935 "num_base_bdevs_discovered": 3, 00:24:27.935 "num_base_bdevs_operational": 3, 00:24:27.935 "base_bdevs_list": [ 00:24:27.935 { 00:24:27.935 "name": "spare", 00:24:27.935 "uuid": "3b0a6fa0-3f7d-567e-8774-9340d23e7efc", 00:24:27.935 "is_configured": true, 00:24:27.935 "data_offset": 0, 00:24:27.935 "data_size": 65536 00:24:27.935 }, 00:24:27.935 { 00:24:27.935 "name": null, 00:24:27.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:27.935 "is_configured": false, 00:24:27.935 "data_offset": 0, 00:24:27.935 "data_size": 65536 00:24:27.935 }, 00:24:27.935 { 00:24:27.935 "name": "BaseBdev3", 00:24:27.935 "uuid": "dc99108c-17a3-420b-b097-152cec9a138d", 00:24:27.935 "is_configured": true, 00:24:27.935 "data_offset": 0, 00:24:27.935 "data_size": 65536 00:24:27.935 }, 00:24:27.935 { 00:24:27.935 "name": "BaseBdev4", 00:24:27.935 "uuid": "613a0be3-980c-46ea-96df-131ac2a6eafb", 00:24:27.935 "is_configured": true, 00:24:27.935 "data_offset": 0, 00:24:27.935 "data_size": 65536 00:24:27.935 } 00:24:27.935 ] 00:24:27.935 }' 00:24:27.935 13:08:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:27.935 13:08:31 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:27.935 13:08:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:27.935 13:08:31 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:27.935 13:08:31 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:27.935 13:08:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:27.935 13:08:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:27.935 13:08:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:27.935 13:08:31 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:27.935 13:08:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:27.935 13:08:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:27.935 13:08:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:27.935 13:08:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:27.935 13:08:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:27.935 13:08:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:27.935 13:08:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:28.194 13:08:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:28.194 "name": "raid_bdev1", 00:24:28.194 "uuid": "912dc8ce-2734-4a41-9855-83d94f32d631", 00:24:28.194 "strip_size_kb": 0, 00:24:28.194 "state": "online", 00:24:28.194 "raid_level": "raid1", 00:24:28.194 "superblock": false, 00:24:28.194 "num_base_bdevs": 4, 00:24:28.194 "num_base_bdevs_discovered": 3, 00:24:28.194 "num_base_bdevs_operational": 3, 00:24:28.194 "base_bdevs_list": [ 00:24:28.194 { 00:24:28.194 "name": "spare", 00:24:28.194 "uuid": "3b0a6fa0-3f7d-567e-8774-9340d23e7efc", 00:24:28.194 "is_configured": true, 00:24:28.194 "data_offset": 0, 00:24:28.194 "data_size": 65536 00:24:28.194 }, 00:24:28.194 { 00:24:28.194 "name": null, 00:24:28.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:28.194 "is_configured": false, 00:24:28.194 "data_offset": 0, 00:24:28.194 "data_size": 65536 00:24:28.194 }, 00:24:28.194 { 00:24:28.194 "name": "BaseBdev3", 00:24:28.194 "uuid": "dc99108c-17a3-420b-b097-152cec9a138d", 00:24:28.194 "is_configured": true, 00:24:28.194 "data_offset": 0, 00:24:28.194 "data_size": 65536 00:24:28.194 }, 00:24:28.194 { 00:24:28.194 "name": "BaseBdev4", 00:24:28.194 "uuid": "613a0be3-980c-46ea-96df-131ac2a6eafb", 00:24:28.194 "is_configured": true, 00:24:28.194 "data_offset": 0, 00:24:28.194 "data_size": 65536 00:24:28.195 } 00:24:28.195 ] 00:24:28.195 }' 00:24:28.195 13:08:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:28.195 13:08:32 -- common/autotest_common.sh@10 -- # set +x 00:24:29.130 13:08:32 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:29.130 [2024-04-17 13:08:33.197269] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:29.130 [2024-04-17 13:08:33.197516] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:29.130 [2024-04-17 13:08:33.197714] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:29.130 [2024-04-17 13:08:33.197924] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:29.130 [2024-04-17 13:08:33.198039] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:24:29.130 13:08:33 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:29.130 13:08:33 -- bdev/bdev_raid.sh@671 -- # jq length 00:24:29.390 13:08:33 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:24:29.390 13:08:33 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:24:29.390 13:08:33 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:29.390 13:08:33 -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk-raid.sock 00:24:29.390 13:08:33 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:24:29.390 13:08:33 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:29.390 13:08:33 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:24:29.390 13:08:33 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:29.390 13:08:33 -- bdev/nbd_common.sh@12 -- # local i 00:24:29.390 13:08:33 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:29.390 13:08:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:29.390 13:08:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:29.649 /dev/nbd0 00:24:29.649 13:08:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:29.649 13:08:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:29.649 13:08:33 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:24:29.649 13:08:33 -- common/autotest_common.sh@855 -- # local i 00:24:29.649 13:08:33 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:24:29.649 13:08:33 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:24:29.649 13:08:33 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:24:29.649 13:08:33 -- common/autotest_common.sh@859 -- # break 00:24:29.649 13:08:33 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:24:29.649 13:08:33 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:24:29.649 13:08:33 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:29.649 1+0 records in 00:24:29.649 1+0 records out 00:24:29.649 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000256172 s, 16.0 MB/s 00:24:29.649 13:08:33 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:29.649 13:08:33 -- common/autotest_common.sh@872 -- # size=4096 00:24:29.649 13:08:33 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:29.649 13:08:33 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:24:29.649 13:08:33 -- common/autotest_common.sh@875 -- # return 0 00:24:29.649 13:08:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:29.649 13:08:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:29.649 13:08:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:24:29.907 /dev/nbd1 00:24:29.907 13:08:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:29.907 13:08:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:29.907 13:08:34 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:24:29.907 13:08:34 -- common/autotest_common.sh@855 -- # local i 00:24:29.907 13:08:34 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:24:29.908 13:08:34 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:24:29.908 13:08:34 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:24:29.908 13:08:34 -- common/autotest_common.sh@859 -- # break 00:24:29.908 13:08:34 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:24:29.908 13:08:34 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:24:29.908 13:08:34 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:29.908 1+0 records in 00:24:29.908 1+0 records out 00:24:29.908 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408807 s, 10.0 MB/s 00:24:29.908 13:08:34 -- common/autotest_common.sh@872 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:29.908 13:08:34 -- common/autotest_common.sh@872 -- # size=4096 00:24:29.908 13:08:34 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:29.908 13:08:34 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:24:29.908 13:08:34 -- common/autotest_common.sh@875 -- # return 0 00:24:29.908 13:08:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:29.908 13:08:34 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:29.908 13:08:34 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:24:30.166 13:08:34 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:24:30.166 13:08:34 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:30.166 13:08:34 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:24:30.166 13:08:34 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:30.166 13:08:34 -- bdev/nbd_common.sh@51 -- # local i 00:24:30.166 13:08:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:30.166 13:08:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:30.424 13:08:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:30.424 13:08:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:30.424 13:08:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:30.424 13:08:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:30.424 13:08:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:30.424 13:08:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:30.424 13:08:34 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:24:30.425 13:08:34 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:24:30.425 13:08:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:30.425 13:08:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:30.425 13:08:34 -- bdev/nbd_common.sh@41 -- # break 00:24:30.425 13:08:34 -- bdev/nbd_common.sh@45 -- # return 0 00:24:30.425 13:08:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:30.425 13:08:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:30.991 13:08:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:30.991 13:08:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:30.991 13:08:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:30.991 13:08:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:30.991 13:08:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:30.991 13:08:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:30.991 13:08:34 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:24:30.991 13:08:34 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:24:30.991 13:08:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:30.991 13:08:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:30.991 13:08:34 -- bdev/nbd_common.sh@41 -- # break 00:24:30.991 13:08:34 -- bdev/nbd_common.sh@45 -- # return 0 00:24:30.991 13:08:34 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:24:30.991 13:08:34 -- bdev/bdev_raid.sh@709 -- # killprocess 132992 00:24:30.991 13:08:34 -- common/autotest_common.sh@924 -- # '[' -z 132992 ']' 00:24:30.991 13:08:34 -- common/autotest_common.sh@928 -- # kill -0 132992 00:24:30.991 13:08:34 -- common/autotest_common.sh@929 -- # uname 00:24:30.991 13:08:34 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:24:30.991 13:08:34 -- 
common/autotest_common.sh@930 -- # ps --no-headers -o comm= 132992 00:24:30.991 13:08:34 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:24:30.991 13:08:34 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:24:30.991 13:08:34 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 132992' 00:24:30.991 killing process with pid 132992 00:24:30.991 13:08:34 -- common/autotest_common.sh@943 -- # kill 132992 00:24:30.991 Received shutdown signal, test time was about 60.000000 seconds 00:24:30.991 00:24:30.991 Latency(us) 00:24:30.991 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:30.991 =================================================================================================================== 00:24:30.991 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:30.991 13:08:34 -- common/autotest_common.sh@948 -- # wait 132992 00:24:30.991 [2024-04-17 13:08:34.978933] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:31.249 [2024-04-17 13:08:35.392318] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:32.659 ************************************ 00:24:32.659 END TEST raid_rebuild_test 00:24:32.659 ************************************ 00:24:32.659 13:08:36 -- bdev/bdev_raid.sh@711 -- # return 0 00:24:32.659 00:24:32.659 real 0m24.905s 00:24:32.659 user 0m34.821s 00:24:32.659 sys 0m3.779s 00:24:32.659 13:08:36 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:24:32.659 13:08:36 -- common/autotest_common.sh@10 -- # set +x 00:24:32.659 13:08:36 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false 00:24:32.659 13:08:36 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:24:32.659 13:08:36 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:24:32.659 13:08:36 -- common/autotest_common.sh@10 -- # set +x 00:24:32.659 ************************************ 00:24:32.659 START TEST raid_rebuild_test_sb 00:24:32.659 ************************************ 00:24:32.659 13:08:36 -- common/autotest_common.sh@1099 -- # raid_rebuild_test raid1 4 true false 00:24:32.659 13:08:36 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:24:32.659 13:08:36 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:24:32.659 13:08:36 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:24:32.659 13:08:36 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:24:32.659 13:08:36 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:24:32.659 13:08:36 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:24:32.659 13:08:36 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:32.659 13:08:36 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:24:32.659 13:08:36 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:32.659 13:08:36 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:32.659 13:08:36 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:24:32.659 13:08:36 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:32.659 13:08:36 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:32.659 13:08:36 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:24:32.659 13:08:36 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:32.659 13:08:36 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:32.659 13:08:36 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:24:32.659 13:08:36 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:32.659 13:08:36 -- bdev/bdev_raid.sh@521 -- # (( i <= 
num_base_bdevs )) 00:24:32.659 13:08:36 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:24:32.659 13:08:36 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:24:32.659 13:08:36 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:24:32.659 13:08:36 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:24:32.659 13:08:36 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:24:32.659 13:08:36 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:24:32.659 13:08:36 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:24:32.659 13:08:36 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:24:32.659 13:08:36 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:24:32.659 13:08:36 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:24:32.659 13:08:36 -- bdev/bdev_raid.sh@544 -- # raid_pid=133622 00:24:32.659 13:08:36 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:32.659 13:08:36 -- bdev/bdev_raid.sh@545 -- # waitforlisten 133622 /var/tmp/spdk-raid.sock 00:24:32.659 13:08:36 -- common/autotest_common.sh@817 -- # '[' -z 133622 ']' 00:24:32.659 13:08:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:32.659 13:08:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:32.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:32.659 13:08:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:32.659 13:08:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:32.659 13:08:36 -- common/autotest_common.sh@10 -- # set +x 00:24:32.659 [2024-04-17 13:08:36.649082] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:24:32.659 [2024-04-17 13:08:36.649416] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133622 ] 00:24:32.659 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:32.659 Zero copy mechanism will not be used. 
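# A self-contained replay of the base_bdevs expansion traced above, assuming
# num_base_bdevs=4 as set in the locals; the echo is only there to show the result:
num_base_bdevs=4
base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done))
echo "${base_bdevs[@]}"   # BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4
# These are the same four names later joined into the -b argument of
# bdev_raid_create, and because superblock=true appended -s to create_arg above,
# that call further down reads "bdev_raid_create -s -r raid1 -b '...' -n raid_bdev1".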
00:24:32.917 [2024-04-17 13:08:36.807139] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.917 [2024-04-17 13:08:37.026416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:33.175 [2024-04-17 13:08:37.222882] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:33.743 13:08:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:33.743 13:08:37 -- common/autotest_common.sh@850 -- # return 0 00:24:33.743 13:08:37 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:33.743 13:08:37 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:33.743 13:08:37 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:34.002 BaseBdev1_malloc 00:24:34.002 13:08:37 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:34.002 [2024-04-17 13:08:38.121598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:34.002 [2024-04-17 13:08:38.121850] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:34.002 [2024-04-17 13:08:38.121941] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:24:34.002 [2024-04-17 13:08:38.122183] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:34.002 [2024-04-17 13:08:38.124847] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:34.002 [2024-04-17 13:08:38.125017] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:34.002 BaseBdev1 00:24:34.002 13:08:38 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:34.002 13:08:38 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:34.002 13:08:38 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:34.260 BaseBdev2_malloc 00:24:34.519 13:08:38 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:34.778 [2024-04-17 13:08:38.675373] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:34.778 [2024-04-17 13:08:38.675725] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:34.778 [2024-04-17 13:08:38.675907] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:24:34.778 [2024-04-17 13:08:38.676063] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:34.778 [2024-04-17 13:08:38.678644] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:34.778 [2024-04-17 13:08:38.678809] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:34.778 BaseBdev2 00:24:34.778 13:08:38 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:34.778 13:08:38 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:34.778 13:08:38 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:35.037 BaseBdev3_malloc 00:24:35.037 13:08:38 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:24:35.296 [2024-04-17 13:08:39.242677] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:35.296 [2024-04-17 13:08:39.242946] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:35.296 [2024-04-17 13:08:39.243101] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:24:35.296 [2024-04-17 13:08:39.243241] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:35.296 [2024-04-17 13:08:39.245856] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:35.296 [2024-04-17 13:08:39.246023] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:35.296 BaseBdev3 00:24:35.296 13:08:39 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:35.296 13:08:39 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:35.296 13:08:39 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:35.555 BaseBdev4_malloc 00:24:35.555 13:08:39 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:24:35.814 [2024-04-17 13:08:39.805550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:24:35.814 [2024-04-17 13:08:39.805832] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:35.814 [2024-04-17 13:08:39.805984] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:24:35.814 [2024-04-17 13:08:39.806132] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:35.814 [2024-04-17 13:08:39.808747] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:35.814 [2024-04-17 13:08:39.808922] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:35.814 BaseBdev4 00:24:35.814 13:08:39 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:36.072 spare_malloc 00:24:36.072 13:08:40 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:36.331 spare_delay 00:24:36.331 13:08:40 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:36.590 [2024-04-17 13:08:40.680772] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:36.590 [2024-04-17 13:08:40.681114] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:36.590 [2024-04-17 13:08:40.681290] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:24:36.590 [2024-04-17 13:08:40.681439] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:36.590 [2024-04-17 13:08:40.684004] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:36.590 [2024-04-17 13:08:40.684179] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:36.590 spare 00:24:36.590 13:08:40 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:24:36.849 [2024-04-17 13:08:40.937007] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:36.849 [2024-04-17 13:08:40.939420] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:36.849 [2024-04-17 13:08:40.939634] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:36.849 [2024-04-17 13:08:40.939832] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:36.849 [2024-04-17 13:08:40.940163] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:24:36.849 [2024-04-17 13:08:40.940292] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:36.849 [2024-04-17 13:08:40.940489] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:24:36.849 [2024-04-17 13:08:40.940997] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:24:36.849 [2024-04-17 13:08:40.941112] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:24:36.849 [2024-04-17 13:08:40.941419] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:36.849 13:08:40 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:24:36.849 13:08:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:36.849 13:08:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:36.849 13:08:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:36.849 13:08:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:36.849 13:08:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:36.849 13:08:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:36.849 13:08:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:36.849 13:08:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:36.849 13:08:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:36.849 13:08:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:36.849 13:08:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:37.108 13:08:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:37.108 "name": "raid_bdev1", 00:24:37.108 "uuid": "e92eda87-9a78-4ad8-9f0d-af5f53f16453", 00:24:37.108 "strip_size_kb": 0, 00:24:37.108 "state": "online", 00:24:37.108 "raid_level": "raid1", 00:24:37.108 "superblock": true, 00:24:37.108 "num_base_bdevs": 4, 00:24:37.108 "num_base_bdevs_discovered": 4, 00:24:37.108 "num_base_bdevs_operational": 4, 00:24:37.108 "base_bdevs_list": [ 00:24:37.108 { 00:24:37.108 "name": "BaseBdev1", 00:24:37.108 "uuid": "cf62c71e-5a0a-5379-a504-9bd6db054c6b", 00:24:37.108 "is_configured": true, 00:24:37.108 "data_offset": 2048, 00:24:37.108 "data_size": 63488 00:24:37.108 }, 00:24:37.108 { 00:24:37.108 "name": "BaseBdev2", 00:24:37.108 "uuid": "85b41573-ff62-5229-89f9-f17e3213bd5f", 00:24:37.108 "is_configured": true, 00:24:37.108 "data_offset": 2048, 00:24:37.108 "data_size": 63488 00:24:37.108 }, 00:24:37.108 { 00:24:37.108 "name": "BaseBdev3", 00:24:37.108 "uuid": "c3fae121-494e-50a3-8ffb-4fad7a307bc1", 00:24:37.108 "is_configured": true, 00:24:37.108 "data_offset": 2048, 00:24:37.108 "data_size": 63488 00:24:37.108 }, 00:24:37.108 
{ 00:24:37.108 "name": "BaseBdev4", 00:24:37.108 "uuid": "56d43ea8-7b76-5b45-8df0-8d0eb2cb8c22", 00:24:37.108 "is_configured": true, 00:24:37.108 "data_offset": 2048, 00:24:37.108 "data_size": 63488 00:24:37.108 } 00:24:37.108 ] 00:24:37.108 }' 00:24:37.108 13:08:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:37.108 13:08:41 -- common/autotest_common.sh@10 -- # set +x 00:24:38.044 13:08:41 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:38.044 13:08:41 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:24:38.044 [2024-04-17 13:08:42.158001] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:38.044 13:08:42 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:24:38.044 13:08:42 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:38.044 13:08:42 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:38.302 13:08:42 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:24:38.302 13:08:42 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:24:38.302 13:08:42 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:24:38.302 13:08:42 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:24:38.302 13:08:42 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:38.302 13:08:42 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:24:38.302 13:08:42 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:38.302 13:08:42 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:24:38.302 13:08:42 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:38.302 13:08:42 -- bdev/nbd_common.sh@12 -- # local i 00:24:38.302 13:08:42 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:38.302 13:08:42 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:38.302 13:08:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:38.560 [2024-04-17 13:08:42.665845] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:24:38.560 /dev/nbd0 00:24:38.560 13:08:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:38.819 13:08:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:38.819 13:08:42 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:24:38.819 13:08:42 -- common/autotest_common.sh@855 -- # local i 00:24:38.819 13:08:42 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:24:38.819 13:08:42 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:24:38.819 13:08:42 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:24:38.819 13:08:42 -- common/autotest_common.sh@859 -- # break 00:24:38.819 13:08:42 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:24:38.819 13:08:42 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:24:38.819 13:08:42 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:38.819 1+0 records in 00:24:38.819 1+0 records out 00:24:38.819 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000765368 s, 5.4 MB/s 00:24:38.819 13:08:42 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:38.819 13:08:42 -- common/autotest_common.sh@872 -- # size=4096 00:24:38.819 13:08:42 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:24:38.819 13:08:42 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:24:38.819 13:08:42 -- common/autotest_common.sh@875 -- # return 0 00:24:38.819 13:08:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:38.819 13:08:42 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:38.819 13:08:42 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:24:38.819 13:08:42 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:24:38.819 13:08:42 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:24:47.006 63488+0 records in 00:24:47.006 63488+0 records out 00:24:47.006 32505856 bytes (33 MB, 31 MiB) copied, 7.45703 s, 4.4 MB/s 00:24:47.006 13:08:50 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:47.006 13:08:50 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:47.006 13:08:50 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:24:47.006 13:08:50 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:47.006 13:08:50 -- bdev/nbd_common.sh@51 -- # local i 00:24:47.006 13:08:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:47.006 13:08:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:47.006 13:08:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:47.006 13:08:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:47.006 13:08:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:47.006 13:08:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:47.006 13:08:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:47.006 13:08:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:47.006 13:08:50 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:24:47.006 [2024-04-17 13:08:50.472535] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:47.006 13:08:50 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:24:47.006 13:08:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:47.006 13:08:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:47.006 13:08:50 -- bdev/nbd_common.sh@41 -- # break 00:24:47.006 13:08:50 -- bdev/nbd_common.sh@45 -- # return 0 00:24:47.006 13:08:50 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:47.006 [2024-04-17 13:08:50.832208] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:47.006 13:08:50 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:47.006 13:08:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:47.006 13:08:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:47.006 13:08:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:47.006 13:08:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:47.006 13:08:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:47.006 13:08:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:47.006 13:08:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:47.006 13:08:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:47.006 13:08:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:47.006 13:08:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:47.006 13:08:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:47.006 13:08:51 -- 
bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:47.006 "name": "raid_bdev1", 00:24:47.006 "uuid": "e92eda87-9a78-4ad8-9f0d-af5f53f16453", 00:24:47.006 "strip_size_kb": 0, 00:24:47.006 "state": "online", 00:24:47.006 "raid_level": "raid1", 00:24:47.006 "superblock": true, 00:24:47.006 "num_base_bdevs": 4, 00:24:47.006 "num_base_bdevs_discovered": 3, 00:24:47.006 "num_base_bdevs_operational": 3, 00:24:47.006 "base_bdevs_list": [ 00:24:47.006 { 00:24:47.006 "name": null, 00:24:47.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:47.006 "is_configured": false, 00:24:47.006 "data_offset": 2048, 00:24:47.006 "data_size": 63488 00:24:47.006 }, 00:24:47.006 { 00:24:47.006 "name": "BaseBdev2", 00:24:47.006 "uuid": "85b41573-ff62-5229-89f9-f17e3213bd5f", 00:24:47.006 "is_configured": true, 00:24:47.006 "data_offset": 2048, 00:24:47.006 "data_size": 63488 00:24:47.006 }, 00:24:47.006 { 00:24:47.006 "name": "BaseBdev3", 00:24:47.006 "uuid": "c3fae121-494e-50a3-8ffb-4fad7a307bc1", 00:24:47.006 "is_configured": true, 00:24:47.006 "data_offset": 2048, 00:24:47.006 "data_size": 63488 00:24:47.006 }, 00:24:47.006 { 00:24:47.006 "name": "BaseBdev4", 00:24:47.006 "uuid": "56d43ea8-7b76-5b45-8df0-8d0eb2cb8c22", 00:24:47.006 "is_configured": true, 00:24:47.006 "data_offset": 2048, 00:24:47.006 "data_size": 63488 00:24:47.006 } 00:24:47.006 ] 00:24:47.006 }' 00:24:47.006 13:08:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:47.006 13:08:51 -- common/autotest_common.sh@10 -- # set +x 00:24:47.941 13:08:51 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:47.941 [2024-04-17 13:08:52.080441] bdev_raid.c:3247:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:47.941 [2024-04-17 13:08:52.080690] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:48.199 [2024-04-17 13:08:52.093881] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca5170 00:24:48.199 [2024-04-17 13:08:52.096345] bdev_raid.c:2751:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:48.199 13:08:52 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:24:49.135 13:08:53 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:49.135 13:08:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:49.135 13:08:53 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:49.135 13:08:53 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:49.135 13:08:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:49.135 13:08:53 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:49.135 13:08:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:49.394 13:08:53 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:49.394 "name": "raid_bdev1", 00:24:49.394 "uuid": "e92eda87-9a78-4ad8-9f0d-af5f53f16453", 00:24:49.394 "strip_size_kb": 0, 00:24:49.394 "state": "online", 00:24:49.394 "raid_level": "raid1", 00:24:49.394 "superblock": true, 00:24:49.394 "num_base_bdevs": 4, 00:24:49.394 "num_base_bdevs_discovered": 4, 00:24:49.394 "num_base_bdevs_operational": 4, 00:24:49.394 "process": { 00:24:49.394 "type": "rebuild", 00:24:49.394 "target": "spare", 00:24:49.394 "progress": { 00:24:49.394 "blocks": 26624, 00:24:49.394 "percent": 41 00:24:49.394 } 00:24:49.394 }, 00:24:49.394 "base_bdevs_list": [ 
00:24:49.394 { 00:24:49.394 "name": "spare", 00:24:49.394 "uuid": "502a387f-fc9f-549f-8d0b-db73bf3bc602", 00:24:49.394 "is_configured": true, 00:24:49.394 "data_offset": 2048, 00:24:49.394 "data_size": 63488 00:24:49.394 }, 00:24:49.394 { 00:24:49.394 "name": "BaseBdev2", 00:24:49.394 "uuid": "85b41573-ff62-5229-89f9-f17e3213bd5f", 00:24:49.394 "is_configured": true, 00:24:49.394 "data_offset": 2048, 00:24:49.394 "data_size": 63488 00:24:49.394 }, 00:24:49.394 { 00:24:49.394 "name": "BaseBdev3", 00:24:49.394 "uuid": "c3fae121-494e-50a3-8ffb-4fad7a307bc1", 00:24:49.394 "is_configured": true, 00:24:49.394 "data_offset": 2048, 00:24:49.394 "data_size": 63488 00:24:49.394 }, 00:24:49.394 { 00:24:49.394 "name": "BaseBdev4", 00:24:49.394 "uuid": "56d43ea8-7b76-5b45-8df0-8d0eb2cb8c22", 00:24:49.394 "is_configured": true, 00:24:49.394 "data_offset": 2048, 00:24:49.394 "data_size": 63488 00:24:49.394 } 00:24:49.394 ] 00:24:49.394 }' 00:24:49.394 13:08:53 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:49.394 13:08:53 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:49.394 13:08:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:49.394 13:08:53 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:49.394 13:08:53 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:49.652 [2024-04-17 13:08:53.774949] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:49.910 [2024-04-17 13:08:53.806964] bdev_raid.c:2442:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:49.910 [2024-04-17 13:08:53.807314] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:49.910 13:08:53 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:49.910 13:08:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:49.910 13:08:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:49.910 13:08:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:49.910 13:08:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:49.910 13:08:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:49.910 13:08:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:49.910 13:08:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:49.910 13:08:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:49.910 13:08:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:49.910 13:08:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:49.910 13:08:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:50.168 13:08:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:50.168 "name": "raid_bdev1", 00:24:50.168 "uuid": "e92eda87-9a78-4ad8-9f0d-af5f53f16453", 00:24:50.168 "strip_size_kb": 0, 00:24:50.168 "state": "online", 00:24:50.168 "raid_level": "raid1", 00:24:50.168 "superblock": true, 00:24:50.168 "num_base_bdevs": 4, 00:24:50.168 "num_base_bdevs_discovered": 3, 00:24:50.168 "num_base_bdevs_operational": 3, 00:24:50.168 "base_bdevs_list": [ 00:24:50.168 { 00:24:50.168 "name": null, 00:24:50.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:50.168 "is_configured": false, 00:24:50.168 "data_offset": 2048, 00:24:50.168 "data_size": 63488 00:24:50.168 }, 00:24:50.168 { 00:24:50.168 "name": 
"BaseBdev2", 00:24:50.168 "uuid": "85b41573-ff62-5229-89f9-f17e3213bd5f", 00:24:50.168 "is_configured": true, 00:24:50.168 "data_offset": 2048, 00:24:50.168 "data_size": 63488 00:24:50.168 }, 00:24:50.168 { 00:24:50.168 "name": "BaseBdev3", 00:24:50.168 "uuid": "c3fae121-494e-50a3-8ffb-4fad7a307bc1", 00:24:50.168 "is_configured": true, 00:24:50.168 "data_offset": 2048, 00:24:50.168 "data_size": 63488 00:24:50.168 }, 00:24:50.168 { 00:24:50.168 "name": "BaseBdev4", 00:24:50.168 "uuid": "56d43ea8-7b76-5b45-8df0-8d0eb2cb8c22", 00:24:50.168 "is_configured": true, 00:24:50.168 "data_offset": 2048, 00:24:50.168 "data_size": 63488 00:24:50.168 } 00:24:50.168 ] 00:24:50.168 }' 00:24:50.168 13:08:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:50.168 13:08:54 -- common/autotest_common.sh@10 -- # set +x 00:24:50.732 13:08:54 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:50.732 13:08:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:50.732 13:08:54 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:50.732 13:08:54 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:50.732 13:08:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:50.732 13:08:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:50.732 13:08:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:50.990 13:08:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:50.990 "name": "raid_bdev1", 00:24:50.990 "uuid": "e92eda87-9a78-4ad8-9f0d-af5f53f16453", 00:24:50.990 "strip_size_kb": 0, 00:24:50.990 "state": "online", 00:24:50.990 "raid_level": "raid1", 00:24:50.990 "superblock": true, 00:24:50.990 "num_base_bdevs": 4, 00:24:50.990 "num_base_bdevs_discovered": 3, 00:24:50.990 "num_base_bdevs_operational": 3, 00:24:50.990 "base_bdevs_list": [ 00:24:50.990 { 00:24:50.990 "name": null, 00:24:50.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:50.990 "is_configured": false, 00:24:50.990 "data_offset": 2048, 00:24:50.990 "data_size": 63488 00:24:50.990 }, 00:24:50.990 { 00:24:50.990 "name": "BaseBdev2", 00:24:50.990 "uuid": "85b41573-ff62-5229-89f9-f17e3213bd5f", 00:24:50.990 "is_configured": true, 00:24:50.990 "data_offset": 2048, 00:24:50.990 "data_size": 63488 00:24:50.990 }, 00:24:50.990 { 00:24:50.990 "name": "BaseBdev3", 00:24:50.990 "uuid": "c3fae121-494e-50a3-8ffb-4fad7a307bc1", 00:24:50.990 "is_configured": true, 00:24:50.990 "data_offset": 2048, 00:24:50.990 "data_size": 63488 00:24:50.990 }, 00:24:50.990 { 00:24:50.990 "name": "BaseBdev4", 00:24:50.990 "uuid": "56d43ea8-7b76-5b45-8df0-8d0eb2cb8c22", 00:24:50.990 "is_configured": true, 00:24:50.990 "data_offset": 2048, 00:24:50.990 "data_size": 63488 00:24:50.990 } 00:24:50.990 ] 00:24:50.990 }' 00:24:50.990 13:08:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:50.990 13:08:55 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:50.990 13:08:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:51.248 13:08:55 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:51.248 13:08:55 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:51.506 [2024-04-17 13:08:55.413610] bdev_raid.c:3247:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:51.506 [2024-04-17 13:08:55.413917] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev spare is claimed 00:24:51.506 [2024-04-17 13:08:55.426296] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca5310 00:24:51.506 [2024-04-17 13:08:55.428730] bdev_raid.c:2751:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:51.506 13:08:55 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:24:52.442 13:08:56 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:52.442 13:08:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:52.442 13:08:56 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:52.442 13:08:56 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:52.442 13:08:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:52.442 13:08:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:52.442 13:08:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:52.700 13:08:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:52.700 "name": "raid_bdev1", 00:24:52.700 "uuid": "e92eda87-9a78-4ad8-9f0d-af5f53f16453", 00:24:52.700 "strip_size_kb": 0, 00:24:52.700 "state": "online", 00:24:52.700 "raid_level": "raid1", 00:24:52.700 "superblock": true, 00:24:52.700 "num_base_bdevs": 4, 00:24:52.700 "num_base_bdevs_discovered": 4, 00:24:52.700 "num_base_bdevs_operational": 4, 00:24:52.700 "process": { 00:24:52.700 "type": "rebuild", 00:24:52.700 "target": "spare", 00:24:52.700 "progress": { 00:24:52.700 "blocks": 24576, 00:24:52.700 "percent": 38 00:24:52.700 } 00:24:52.700 }, 00:24:52.700 "base_bdevs_list": [ 00:24:52.700 { 00:24:52.700 "name": "spare", 00:24:52.700 "uuid": "502a387f-fc9f-549f-8d0b-db73bf3bc602", 00:24:52.700 "is_configured": true, 00:24:52.700 "data_offset": 2048, 00:24:52.700 "data_size": 63488 00:24:52.700 }, 00:24:52.700 { 00:24:52.700 "name": "BaseBdev2", 00:24:52.700 "uuid": "85b41573-ff62-5229-89f9-f17e3213bd5f", 00:24:52.700 "is_configured": true, 00:24:52.700 "data_offset": 2048, 00:24:52.700 "data_size": 63488 00:24:52.700 }, 00:24:52.700 { 00:24:52.700 "name": "BaseBdev3", 00:24:52.701 "uuid": "c3fae121-494e-50a3-8ffb-4fad7a307bc1", 00:24:52.701 "is_configured": true, 00:24:52.701 "data_offset": 2048, 00:24:52.701 "data_size": 63488 00:24:52.701 }, 00:24:52.701 { 00:24:52.701 "name": "BaseBdev4", 00:24:52.701 "uuid": "56d43ea8-7b76-5b45-8df0-8d0eb2cb8c22", 00:24:52.701 "is_configured": true, 00:24:52.701 "data_offset": 2048, 00:24:52.701 "data_size": 63488 00:24:52.701 } 00:24:52.701 ] 00:24:52.701 }' 00:24:52.701 13:08:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:52.701 13:08:56 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:52.701 13:08:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:52.959 13:08:56 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:52.959 13:08:56 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:24:52.959 13:08:56 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:24:52.959 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:24:52.959 13:08:56 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:24:52.959 13:08:56 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:24:52.959 13:08:56 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:24:52.959 13:08:56 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_remove_base_bdev BaseBdev2 00:24:53.218 [2024-04-17 13:08:57.126787] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:53.218 [2024-04-17 13:08:57.139896] bdev_raid.c:1969:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca5310 00:24:53.218 13:08:57 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:24:53.218 13:08:57 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:24:53.218 13:08:57 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:53.218 13:08:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:53.218 13:08:57 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:53.218 13:08:57 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:53.218 13:08:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:53.218 13:08:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:53.218 13:08:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:53.476 13:08:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:53.476 "name": "raid_bdev1", 00:24:53.476 "uuid": "e92eda87-9a78-4ad8-9f0d-af5f53f16453", 00:24:53.476 "strip_size_kb": 0, 00:24:53.476 "state": "online", 00:24:53.476 "raid_level": "raid1", 00:24:53.476 "superblock": true, 00:24:53.476 "num_base_bdevs": 4, 00:24:53.476 "num_base_bdevs_discovered": 3, 00:24:53.476 "num_base_bdevs_operational": 3, 00:24:53.476 "process": { 00:24:53.476 "type": "rebuild", 00:24:53.476 "target": "spare", 00:24:53.476 "progress": { 00:24:53.476 "blocks": 40960, 00:24:53.476 "percent": 64 00:24:53.476 } 00:24:53.476 }, 00:24:53.476 "base_bdevs_list": [ 00:24:53.476 { 00:24:53.476 "name": "spare", 00:24:53.476 "uuid": "502a387f-fc9f-549f-8d0b-db73bf3bc602", 00:24:53.476 "is_configured": true, 00:24:53.476 "data_offset": 2048, 00:24:53.476 "data_size": 63488 00:24:53.476 }, 00:24:53.476 { 00:24:53.476 "name": null, 00:24:53.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:53.476 "is_configured": false, 00:24:53.476 "data_offset": 2048, 00:24:53.476 "data_size": 63488 00:24:53.476 }, 00:24:53.476 { 00:24:53.476 "name": "BaseBdev3", 00:24:53.476 "uuid": "c3fae121-494e-50a3-8ffb-4fad7a307bc1", 00:24:53.476 "is_configured": true, 00:24:53.476 "data_offset": 2048, 00:24:53.476 "data_size": 63488 00:24:53.476 }, 00:24:53.476 { 00:24:53.476 "name": "BaseBdev4", 00:24:53.476 "uuid": "56d43ea8-7b76-5b45-8df0-8d0eb2cb8c22", 00:24:53.476 "is_configured": true, 00:24:53.476 "data_offset": 2048, 00:24:53.477 "data_size": 63488 00:24:53.477 } 00:24:53.477 ] 00:24:53.477 }' 00:24:53.477 13:08:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:53.477 13:08:57 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:53.477 13:08:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:53.477 13:08:57 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:53.477 13:08:57 -- bdev/bdev_raid.sh@657 -- # local timeout=558 00:24:53.477 13:08:57 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:53.477 13:08:57 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:53.477 13:08:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:53.477 13:08:57 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:53.477 13:08:57 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:53.477 13:08:57 -- bdev/bdev_raid.sh@186 -- # local 
raid_bdev_info 00:24:53.477 13:08:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:53.477 13:08:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:53.734 13:08:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:53.734 "name": "raid_bdev1", 00:24:53.734 "uuid": "e92eda87-9a78-4ad8-9f0d-af5f53f16453", 00:24:53.734 "strip_size_kb": 0, 00:24:53.734 "state": "online", 00:24:53.734 "raid_level": "raid1", 00:24:53.734 "superblock": true, 00:24:53.734 "num_base_bdevs": 4, 00:24:53.734 "num_base_bdevs_discovered": 3, 00:24:53.734 "num_base_bdevs_operational": 3, 00:24:53.734 "process": { 00:24:53.734 "type": "rebuild", 00:24:53.734 "target": "spare", 00:24:53.734 "progress": { 00:24:53.734 "blocks": 47104, 00:24:53.734 "percent": 74 00:24:53.734 } 00:24:53.734 }, 00:24:53.734 "base_bdevs_list": [ 00:24:53.734 { 00:24:53.734 "name": "spare", 00:24:53.734 "uuid": "502a387f-fc9f-549f-8d0b-db73bf3bc602", 00:24:53.734 "is_configured": true, 00:24:53.735 "data_offset": 2048, 00:24:53.735 "data_size": 63488 00:24:53.735 }, 00:24:53.735 { 00:24:53.735 "name": null, 00:24:53.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:53.735 "is_configured": false, 00:24:53.735 "data_offset": 2048, 00:24:53.735 "data_size": 63488 00:24:53.735 }, 00:24:53.735 { 00:24:53.735 "name": "BaseBdev3", 00:24:53.735 "uuid": "c3fae121-494e-50a3-8ffb-4fad7a307bc1", 00:24:53.735 "is_configured": true, 00:24:53.735 "data_offset": 2048, 00:24:53.735 "data_size": 63488 00:24:53.735 }, 00:24:53.735 { 00:24:53.735 "name": "BaseBdev4", 00:24:53.735 "uuid": "56d43ea8-7b76-5b45-8df0-8d0eb2cb8c22", 00:24:53.735 "is_configured": true, 00:24:53.735 "data_offset": 2048, 00:24:53.735 "data_size": 63488 00:24:53.735 } 00:24:53.735 ] 00:24:53.735 }' 00:24:53.735 13:08:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:53.992 13:08:57 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:53.992 13:08:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:53.992 13:08:57 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:53.992 13:08:57 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:54.558 [2024-04-17 13:08:58.548838] bdev_raid.c:2716:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:54.558 [2024-04-17 13:08:58.549159] bdev_raid.c:2433:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:54.558 [2024-04-17 13:08:58.549476] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:55.126 13:08:58 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:55.126 13:08:58 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:55.126 13:08:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:55.126 13:08:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:55.126 13:08:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:55.126 13:08:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:55.126 13:08:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:55.126 13:08:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:55.384 13:08:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:55.384 "name": "raid_bdev1", 00:24:55.384 "uuid": "e92eda87-9a78-4ad8-9f0d-af5f53f16453", 00:24:55.384 "strip_size_kb": 0, 
00:24:55.384 "state": "online", 00:24:55.384 "raid_level": "raid1", 00:24:55.384 "superblock": true, 00:24:55.384 "num_base_bdevs": 4, 00:24:55.384 "num_base_bdevs_discovered": 3, 00:24:55.384 "num_base_bdevs_operational": 3, 00:24:55.384 "base_bdevs_list": [ 00:24:55.384 { 00:24:55.384 "name": "spare", 00:24:55.384 "uuid": "502a387f-fc9f-549f-8d0b-db73bf3bc602", 00:24:55.384 "is_configured": true, 00:24:55.384 "data_offset": 2048, 00:24:55.384 "data_size": 63488 00:24:55.384 }, 00:24:55.384 { 00:24:55.384 "name": null, 00:24:55.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:55.384 "is_configured": false, 00:24:55.384 "data_offset": 2048, 00:24:55.384 "data_size": 63488 00:24:55.384 }, 00:24:55.384 { 00:24:55.384 "name": "BaseBdev3", 00:24:55.384 "uuid": "c3fae121-494e-50a3-8ffb-4fad7a307bc1", 00:24:55.384 "is_configured": true, 00:24:55.384 "data_offset": 2048, 00:24:55.384 "data_size": 63488 00:24:55.384 }, 00:24:55.384 { 00:24:55.384 "name": "BaseBdev4", 00:24:55.384 "uuid": "56d43ea8-7b76-5b45-8df0-8d0eb2cb8c22", 00:24:55.384 "is_configured": true, 00:24:55.384 "data_offset": 2048, 00:24:55.384 "data_size": 63488 00:24:55.384 } 00:24:55.384 ] 00:24:55.384 }' 00:24:55.384 13:08:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:55.384 13:08:59 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:55.384 13:08:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:55.384 13:08:59 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:24:55.384 13:08:59 -- bdev/bdev_raid.sh@660 -- # break 00:24:55.384 13:08:59 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:55.384 13:08:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:55.384 13:08:59 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:55.384 13:08:59 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:55.384 13:08:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:55.384 13:08:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:55.384 13:08:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:55.642 13:08:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:55.642 "name": "raid_bdev1", 00:24:55.642 "uuid": "e92eda87-9a78-4ad8-9f0d-af5f53f16453", 00:24:55.642 "strip_size_kb": 0, 00:24:55.642 "state": "online", 00:24:55.642 "raid_level": "raid1", 00:24:55.642 "superblock": true, 00:24:55.642 "num_base_bdevs": 4, 00:24:55.642 "num_base_bdevs_discovered": 3, 00:24:55.642 "num_base_bdevs_operational": 3, 00:24:55.642 "base_bdevs_list": [ 00:24:55.642 { 00:24:55.642 "name": "spare", 00:24:55.642 "uuid": "502a387f-fc9f-549f-8d0b-db73bf3bc602", 00:24:55.642 "is_configured": true, 00:24:55.642 "data_offset": 2048, 00:24:55.642 "data_size": 63488 00:24:55.642 }, 00:24:55.642 { 00:24:55.642 "name": null, 00:24:55.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:55.642 "is_configured": false, 00:24:55.642 "data_offset": 2048, 00:24:55.642 "data_size": 63488 00:24:55.642 }, 00:24:55.642 { 00:24:55.642 "name": "BaseBdev3", 00:24:55.642 "uuid": "c3fae121-494e-50a3-8ffb-4fad7a307bc1", 00:24:55.642 "is_configured": true, 00:24:55.642 "data_offset": 2048, 00:24:55.642 "data_size": 63488 00:24:55.642 }, 00:24:55.642 { 00:24:55.642 "name": "BaseBdev4", 00:24:55.642 "uuid": "56d43ea8-7b76-5b45-8df0-8d0eb2cb8c22", 00:24:55.642 "is_configured": true, 00:24:55.642 "data_offset": 2048, 00:24:55.642 
"data_size": 63488 00:24:55.642 } 00:24:55.642 ] 00:24:55.642 }' 00:24:55.642 13:08:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:55.642 13:08:59 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:55.642 13:08:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:55.642 13:08:59 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:55.642 13:08:59 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:55.642 13:08:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:55.642 13:08:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:55.642 13:08:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:55.642 13:08:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:55.642 13:08:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:55.643 13:08:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:55.643 13:08:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:55.643 13:08:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:55.643 13:08:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:55.643 13:08:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:55.643 13:08:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:55.901 13:09:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:55.901 "name": "raid_bdev1", 00:24:55.901 "uuid": "e92eda87-9a78-4ad8-9f0d-af5f53f16453", 00:24:55.901 "strip_size_kb": 0, 00:24:55.901 "state": "online", 00:24:55.901 "raid_level": "raid1", 00:24:55.901 "superblock": true, 00:24:55.901 "num_base_bdevs": 4, 00:24:55.901 "num_base_bdevs_discovered": 3, 00:24:55.901 "num_base_bdevs_operational": 3, 00:24:55.901 "base_bdevs_list": [ 00:24:55.901 { 00:24:55.901 "name": "spare", 00:24:55.901 "uuid": "502a387f-fc9f-549f-8d0b-db73bf3bc602", 00:24:55.901 "is_configured": true, 00:24:55.901 "data_offset": 2048, 00:24:55.901 "data_size": 63488 00:24:55.901 }, 00:24:55.901 { 00:24:55.901 "name": null, 00:24:55.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:55.901 "is_configured": false, 00:24:55.901 "data_offset": 2048, 00:24:55.901 "data_size": 63488 00:24:55.901 }, 00:24:55.901 { 00:24:55.901 "name": "BaseBdev3", 00:24:55.901 "uuid": "c3fae121-494e-50a3-8ffb-4fad7a307bc1", 00:24:55.901 "is_configured": true, 00:24:55.901 "data_offset": 2048, 00:24:55.901 "data_size": 63488 00:24:55.901 }, 00:24:55.901 { 00:24:55.901 "name": "BaseBdev4", 00:24:55.901 "uuid": "56d43ea8-7b76-5b45-8df0-8d0eb2cb8c22", 00:24:55.901 "is_configured": true, 00:24:55.901 "data_offset": 2048, 00:24:55.901 "data_size": 63488 00:24:55.901 } 00:24:55.901 ] 00:24:55.901 }' 00:24:55.901 13:09:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:55.901 13:09:00 -- common/autotest_common.sh@10 -- # set +x 00:24:56.861 13:09:00 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:57.121 [2024-04-17 13:09:01.019413] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:57.121 [2024-04-17 13:09:01.019642] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:57.121 [2024-04-17 13:09:01.019857] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:57.121 [2024-04-17 13:09:01.020070] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:57.121 [2024-04-17 13:09:01.020220] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:24:57.121 13:09:01 -- bdev/bdev_raid.sh@671 -- # jq length 00:24:57.121 13:09:01 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:57.380 13:09:01 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:24:57.380 13:09:01 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:24:57.380 13:09:01 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:57.380 13:09:01 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:57.380 13:09:01 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:24:57.380 13:09:01 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:57.380 13:09:01 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:24:57.380 13:09:01 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:57.380 13:09:01 -- bdev/nbd_common.sh@12 -- # local i 00:24:57.380 13:09:01 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:57.380 13:09:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:57.380 13:09:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:57.639 /dev/nbd0 00:24:57.639 13:09:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:57.639 13:09:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:57.639 13:09:01 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:24:57.639 13:09:01 -- common/autotest_common.sh@855 -- # local i 00:24:57.639 13:09:01 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:24:57.639 13:09:01 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:24:57.639 13:09:01 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:24:57.639 13:09:01 -- common/autotest_common.sh@859 -- # break 00:24:57.639 13:09:01 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:24:57.639 13:09:01 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:24:57.639 13:09:01 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:57.639 1+0 records in 00:24:57.639 1+0 records out 00:24:57.639 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00044764 s, 9.2 MB/s 00:24:57.639 13:09:01 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:57.639 13:09:01 -- common/autotest_common.sh@872 -- # size=4096 00:24:57.639 13:09:01 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:57.639 13:09:01 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:24:57.639 13:09:01 -- common/autotest_common.sh@875 -- # return 0 00:24:57.639 13:09:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:57.639 13:09:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:57.639 13:09:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:24:57.899 /dev/nbd1 00:24:57.899 13:09:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:57.899 13:09:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:57.899 13:09:01 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:24:57.899 13:09:01 -- common/autotest_common.sh@855 -- # local i 00:24:57.899 13:09:01 -- common/autotest_common.sh@857 -- # (( i 
= 1 )) 00:24:57.899 13:09:01 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:24:57.899 13:09:01 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:24:57.899 13:09:01 -- common/autotest_common.sh@859 -- # break 00:24:57.899 13:09:01 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:24:57.899 13:09:01 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:24:57.899 13:09:01 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:57.899 1+0 records in 00:24:57.899 1+0 records out 00:24:57.899 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000650538 s, 6.3 MB/s 00:24:57.899 13:09:01 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:57.899 13:09:01 -- common/autotest_common.sh@872 -- # size=4096 00:24:57.899 13:09:01 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:57.899 13:09:01 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:24:57.899 13:09:01 -- common/autotest_common.sh@875 -- # return 0 00:24:57.899 13:09:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:57.899 13:09:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:57.899 13:09:01 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:58.158 13:09:02 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:24:58.158 13:09:02 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:58.158 13:09:02 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:24:58.158 13:09:02 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:58.158 13:09:02 -- bdev/nbd_common.sh@51 -- # local i 00:24:58.158 13:09:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:58.158 13:09:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:58.417 13:09:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:58.417 13:09:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:58.417 13:09:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:58.417 13:09:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:58.417 13:09:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:58.417 13:09:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:58.417 13:09:02 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:24:58.417 13:09:02 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:24:58.417 13:09:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:58.417 13:09:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:58.417 13:09:02 -- bdev/nbd_common.sh@41 -- # break 00:24:58.417 13:09:02 -- bdev/nbd_common.sh@45 -- # return 0 00:24:58.417 13:09:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:58.417 13:09:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:58.675 13:09:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:58.675 13:09:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:58.675 13:09:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:58.675 13:09:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:58.675 13:09:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:58.675 13:09:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:58.675 13:09:02 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:24:58.934 13:09:02 -- bdev/nbd_common.sh@37 -- # 
(( i++ )) 00:24:58.934 13:09:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:58.934 13:09:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:58.934 13:09:02 -- bdev/nbd_common.sh@41 -- # break 00:24:58.934 13:09:02 -- bdev/nbd_common.sh@45 -- # return 0 00:24:58.934 13:09:02 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:24:58.934 13:09:02 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:58.934 13:09:02 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:24:58.934 13:09:02 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:24:59.192 13:09:03 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:59.450 [2024-04-17 13:09:03.422199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:59.450 [2024-04-17 13:09:03.422514] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:59.450 [2024-04-17 13:09:03.422669] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:24:59.450 [2024-04-17 13:09:03.422847] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:59.450 [2024-04-17 13:09:03.425703] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:59.450 [2024-04-17 13:09:03.425897] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:59.450 [2024-04-17 13:09:03.426172] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:59.450 [2024-04-17 13:09:03.426338] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:59.450 BaseBdev1 00:24:59.450 13:09:03 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:59.450 13:09:03 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:24:59.450 13:09:03 -- bdev/bdev_raid.sh@696 -- # continue 00:24:59.450 13:09:03 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:59.450 13:09:03 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:24:59.450 13:09:03 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:24:59.708 13:09:03 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:59.966 [2024-04-17 13:09:03.950489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:59.966 [2024-04-17 13:09:03.950807] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:59.966 [2024-04-17 13:09:03.950887] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:24:59.966 [2024-04-17 13:09:03.951148] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:59.966 [2024-04-17 13:09:03.951690] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:59.966 [2024-04-17 13:09:03.951925] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:59.966 [2024-04-17 13:09:03.952186] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:24:59.966 [2024-04-17 13:09:03.952298] bdev_raid.c:3395:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number 
on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:24:59.966 [2024-04-17 13:09:03.952394] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:59.966 [2024-04-17 13:09:03.952448] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state configuring 00:24:59.966 [2024-04-17 13:09:03.952610] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:59.966 BaseBdev3 00:24:59.966 13:09:03 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:59.966 13:09:03 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:24:59.966 13:09:03 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:25:00.224 13:09:04 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:25:00.482 [2024-04-17 13:09:04.426516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:25:00.482 [2024-04-17 13:09:04.426805] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:00.482 [2024-04-17 13:09:04.426955] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:25:00.482 [2024-04-17 13:09:04.427075] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:00.482 [2024-04-17 13:09:04.427614] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:00.482 [2024-04-17 13:09:04.427794] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:00.482 [2024-04-17 13:09:04.428077] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:25:00.482 [2024-04-17 13:09:04.428241] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:00.482 BaseBdev4 00:25:00.482 13:09:04 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:00.740 13:09:04 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:00.740 [2024-04-17 13:09:04.870664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:00.740 [2024-04-17 13:09:04.871025] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:00.740 [2024-04-17 13:09:04.871189] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:25:00.740 [2024-04-17 13:09:04.871320] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:00.740 [2024-04-17 13:09:04.872046] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:00.740 [2024-04-17 13:09:04.872249] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:00.740 [2024-04-17 13:09:04.872489] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:25:00.740 [2024-04-17 13:09:04.872658] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:00.740 spare 00:25:01.000 13:09:04 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:01.000 13:09:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:01.000 13:09:04 -- 
bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:01.000 13:09:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:01.000 13:09:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:01.000 13:09:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:01.000 13:09:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:01.000 13:09:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:01.000 13:09:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:01.000 13:09:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:01.000 13:09:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:01.000 13:09:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:01.000 [2024-04-17 13:09:04.972942] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c380 00:25:01.000 [2024-04-17 13:09:04.973160] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:01.000 [2024-04-17 13:09:04.973392] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc5f20 00:25:01.000 [2024-04-17 13:09:04.973966] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c380 00:25:01.000 [2024-04-17 13:09:04.974123] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c380 00:25:01.000 [2024-04-17 13:09:04.974426] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:01.000 13:09:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:01.000 "name": "raid_bdev1", 00:25:01.000 "uuid": "e92eda87-9a78-4ad8-9f0d-af5f53f16453", 00:25:01.000 "strip_size_kb": 0, 00:25:01.000 "state": "online", 00:25:01.000 "raid_level": "raid1", 00:25:01.000 "superblock": true, 00:25:01.000 "num_base_bdevs": 4, 00:25:01.000 "num_base_bdevs_discovered": 3, 00:25:01.000 "num_base_bdevs_operational": 3, 00:25:01.000 "base_bdevs_list": [ 00:25:01.000 { 00:25:01.000 "name": "spare", 00:25:01.000 "uuid": "502a387f-fc9f-549f-8d0b-db73bf3bc602", 00:25:01.000 "is_configured": true, 00:25:01.000 "data_offset": 2048, 00:25:01.000 "data_size": 63488 00:25:01.000 }, 00:25:01.000 { 00:25:01.000 "name": null, 00:25:01.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:01.000 "is_configured": false, 00:25:01.000 "data_offset": 2048, 00:25:01.000 "data_size": 63488 00:25:01.000 }, 00:25:01.000 { 00:25:01.000 "name": "BaseBdev3", 00:25:01.000 "uuid": "c3fae121-494e-50a3-8ffb-4fad7a307bc1", 00:25:01.000 "is_configured": true, 00:25:01.000 "data_offset": 2048, 00:25:01.000 "data_size": 63488 00:25:01.000 }, 00:25:01.000 { 00:25:01.000 "name": "BaseBdev4", 00:25:01.000 "uuid": "56d43ea8-7b76-5b45-8df0-8d0eb2cb8c22", 00:25:01.000 "is_configured": true, 00:25:01.000 "data_offset": 2048, 00:25:01.000 "data_size": 63488 00:25:01.000 } 00:25:01.000 ] 00:25:01.000 }' 00:25:01.000 13:09:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:01.000 13:09:05 -- common/autotest_common.sh@10 -- # set +x 00:25:01.936 13:09:05 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:01.936 13:09:05 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:01.936 13:09:05 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:01.936 13:09:05 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:01.936 13:09:05 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:01.936 13:09:05 -- 
bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:01.936 13:09:05 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:01.936 13:09:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:01.936 "name": "raid_bdev1", 00:25:01.936 "uuid": "e92eda87-9a78-4ad8-9f0d-af5f53f16453", 00:25:01.936 "strip_size_kb": 0, 00:25:01.936 "state": "online", 00:25:01.936 "raid_level": "raid1", 00:25:01.936 "superblock": true, 00:25:01.936 "num_base_bdevs": 4, 00:25:01.936 "num_base_bdevs_discovered": 3, 00:25:01.936 "num_base_bdevs_operational": 3, 00:25:01.936 "base_bdevs_list": [ 00:25:01.936 { 00:25:01.936 "name": "spare", 00:25:01.936 "uuid": "502a387f-fc9f-549f-8d0b-db73bf3bc602", 00:25:01.936 "is_configured": true, 00:25:01.936 "data_offset": 2048, 00:25:01.936 "data_size": 63488 00:25:01.936 }, 00:25:01.936 { 00:25:01.936 "name": null, 00:25:01.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:01.936 "is_configured": false, 00:25:01.936 "data_offset": 2048, 00:25:01.936 "data_size": 63488 00:25:01.936 }, 00:25:01.936 { 00:25:01.936 "name": "BaseBdev3", 00:25:01.936 "uuid": "c3fae121-494e-50a3-8ffb-4fad7a307bc1", 00:25:01.936 "is_configured": true, 00:25:01.936 "data_offset": 2048, 00:25:01.936 "data_size": 63488 00:25:01.936 }, 00:25:01.936 { 00:25:01.936 "name": "BaseBdev4", 00:25:01.936 "uuid": "56d43ea8-7b76-5b45-8df0-8d0eb2cb8c22", 00:25:01.936 "is_configured": true, 00:25:01.936 "data_offset": 2048, 00:25:01.936 "data_size": 63488 00:25:01.936 } 00:25:01.936 ] 00:25:01.936 }' 00:25:01.936 13:09:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:02.195 13:09:06 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:02.195 13:09:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:02.195 13:09:06 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:02.195 13:09:06 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:02.195 13:09:06 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:25:02.454 13:09:06 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:25:02.454 13:09:06 -- bdev/bdev_raid.sh@709 -- # killprocess 133622 00:25:02.454 13:09:06 -- common/autotest_common.sh@924 -- # '[' -z 133622 ']' 00:25:02.454 13:09:06 -- common/autotest_common.sh@928 -- # kill -0 133622 00:25:02.454 13:09:06 -- common/autotest_common.sh@929 -- # uname 00:25:02.454 13:09:06 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:25:02.454 13:09:06 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 133622 00:25:02.454 killing process with pid 133622 00:25:02.454 Received shutdown signal, test time was about 60.000000 seconds 00:25:02.454 00:25:02.454 Latency(us) 00:25:02.454 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:02.454 =================================================================================================================== 00:25:02.454 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:02.454 13:09:06 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:25:02.454 13:09:06 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:25:02.454 13:09:06 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 133622' 00:25:02.454 13:09:06 -- common/autotest_common.sh@943 -- # kill 133622 00:25:02.454 13:09:06 -- common/autotest_common.sh@948 -- # 
wait 133622 00:25:02.454 [2024-04-17 13:09:06.450435] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:02.454 [2024-04-17 13:09:06.450532] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:02.454 [2024-04-17 13:09:06.450619] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:02.454 [2024-04-17 13:09:06.450632] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c380 name raid_bdev1, state offline 00:25:03.022 [2024-04-17 13:09:06.880713] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:03.958 ************************************ 00:25:03.958 END TEST raid_rebuild_test_sb 00:25:03.958 ************************************ 00:25:03.958 13:09:08 -- bdev/bdev_raid.sh@711 -- # return 0 00:25:03.958 00:25:03.958 real 0m31.426s 00:25:03.958 user 0m46.078s 00:25:03.958 sys 0m4.820s 00:25:03.958 13:09:08 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:25:03.958 13:09:08 -- common/autotest_common.sh@10 -- # set +x 00:25:03.958 13:09:08 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true 00:25:03.958 13:09:08 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:25:03.958 13:09:08 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:25:03.958 13:09:08 -- common/autotest_common.sh@10 -- # set +x 00:25:04.217 ************************************ 00:25:04.217 START TEST raid_rebuild_test_io 00:25:04.217 ************************************ 00:25:04.217 13:09:08 -- common/autotest_common.sh@1099 -- # raid_rebuild_test raid1 4 false true 00:25:04.217 13:09:08 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:25:04.217 13:09:08 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:25:04.217 13:09:08 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:25:04.217 13:09:08 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:25:04.217 13:09:08 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:25:04.217 13:09:08 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:25:04.217 13:09:08 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:04.217 13:09:08 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:25:04.217 13:09:08 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:04.217 13:09:08 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:04.217 13:09:08 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:25:04.217 13:09:08 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:04.217 13:09:08 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:04.217 13:09:08 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:25:04.217 13:09:08 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:04.217 13:09:08 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:04.217 13:09:08 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:25:04.217 13:09:08 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:04.217 13:09:08 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:04.217 13:09:08 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:25:04.217 13:09:08 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:25:04.217 13:09:08 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:25:04.217 13:09:08 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:25:04.217 13:09:08 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:25:04.217 13:09:08 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:25:04.217 13:09:08 -- bdev/bdev_raid.sh@528 -- 
# '[' raid1 '!=' raid1 ']' 00:25:04.217 13:09:08 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:25:04.217 13:09:08 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:25:04.217 13:09:08 -- bdev/bdev_raid.sh@544 -- # raid_pid=134382 00:25:04.217 13:09:08 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:04.217 13:09:08 -- bdev/bdev_raid.sh@545 -- # waitforlisten 134382 /var/tmp/spdk-raid.sock 00:25:04.217 13:09:08 -- common/autotest_common.sh@817 -- # '[' -z 134382 ']' 00:25:04.217 13:09:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:04.217 13:09:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:04.217 13:09:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:04.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:04.217 13:09:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:04.217 13:09:08 -- common/autotest_common.sh@10 -- # set +x 00:25:04.217 [2024-04-17 13:09:08.182321] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:25:04.217 [2024-04-17 13:09:08.182725] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134382 ] 00:25:04.217 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:04.217 Zero copy mechanism will not be used. 00:25:04.217 [2024-04-17 13:09:08.354559] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.476 [2024-04-17 13:09:08.569056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:04.735 [2024-04-17 13:09:08.773739] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:04.993 13:09:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:04.993 13:09:09 -- common/autotest_common.sh@850 -- # return 0 00:25:04.993 13:09:09 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:04.993 13:09:09 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:04.993 13:09:09 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:05.558 BaseBdev1 00:25:05.558 13:09:09 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:05.558 13:09:09 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:05.558 13:09:09 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:05.816 BaseBdev2 00:25:05.816 13:09:09 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:05.816 13:09:09 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:05.816 13:09:09 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:06.073 BaseBdev3 00:25:06.073 13:09:09 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:06.073 13:09:09 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:06.073 13:09:09 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:06.334 
BaseBdev4 00:25:06.334 13:09:10 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:25:06.591 spare_malloc 00:25:06.591 13:09:10 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:06.847 spare_delay 00:25:06.847 13:09:10 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:07.103 [2024-04-17 13:09:11.027712] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:07.103 [2024-04-17 13:09:11.028055] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:07.103 [2024-04-17 13:09:11.028209] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:25:07.103 [2024-04-17 13:09:11.028376] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:07.103 [2024-04-17 13:09:11.031073] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:07.103 [2024-04-17 13:09:11.031246] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:07.103 spare 00:25:07.103 13:09:11 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:25:07.383 [2024-04-17 13:09:11.259906] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:07.383 [2024-04-17 13:09:11.262345] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:07.383 [2024-04-17 13:09:11.262553] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:07.383 [2024-04-17 13:09:11.262703] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:07.383 [2024-04-17 13:09:11.262931] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:25:07.383 [2024-04-17 13:09:11.263058] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:25:07.383 [2024-04-17 13:09:11.263289] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:25:07.383 [2024-04-17 13:09:11.263851] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:25:07.383 [2024-04-17 13:09:11.263972] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:25:07.383 [2024-04-17 13:09:11.264310] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:07.383 13:09:11 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:25:07.383 13:09:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:07.383 13:09:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:07.383 13:09:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:07.383 13:09:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:07.383 13:09:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:07.383 13:09:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:07.383 13:09:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:07.383 13:09:11 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:25:07.383 13:09:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:07.383 13:09:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:07.383 13:09:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:07.383 13:09:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:07.383 "name": "raid_bdev1", 00:25:07.383 "uuid": "73be44e0-733b-49a4-aa95-48b402aa3bee", 00:25:07.383 "strip_size_kb": 0, 00:25:07.383 "state": "online", 00:25:07.383 "raid_level": "raid1", 00:25:07.383 "superblock": false, 00:25:07.383 "num_base_bdevs": 4, 00:25:07.383 "num_base_bdevs_discovered": 4, 00:25:07.383 "num_base_bdevs_operational": 4, 00:25:07.383 "base_bdevs_list": [ 00:25:07.383 { 00:25:07.383 "name": "BaseBdev1", 00:25:07.383 "uuid": "059cf180-ec66-493c-b661-7547caafbd0d", 00:25:07.383 "is_configured": true, 00:25:07.383 "data_offset": 0, 00:25:07.383 "data_size": 65536 00:25:07.383 }, 00:25:07.383 { 00:25:07.383 "name": "BaseBdev2", 00:25:07.383 "uuid": "14b22665-0d3b-4c33-a27c-19d464e111c5", 00:25:07.383 "is_configured": true, 00:25:07.383 "data_offset": 0, 00:25:07.383 "data_size": 65536 00:25:07.383 }, 00:25:07.383 { 00:25:07.383 "name": "BaseBdev3", 00:25:07.383 "uuid": "6c061d0f-6d49-45e7-b8c4-196dc67f2c15", 00:25:07.383 "is_configured": true, 00:25:07.383 "data_offset": 0, 00:25:07.383 "data_size": 65536 00:25:07.383 }, 00:25:07.383 { 00:25:07.383 "name": "BaseBdev4", 00:25:07.383 "uuid": "e9381060-fec2-443c-9b04-57c6df9a5823", 00:25:07.383 "is_configured": true, 00:25:07.383 "data_offset": 0, 00:25:07.383 "data_size": 65536 00:25:07.383 } 00:25:07.383 ] 00:25:07.383 }' 00:25:07.383 13:09:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:07.383 13:09:11 -- common/autotest_common.sh@10 -- # set +x 00:25:08.314 13:09:12 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:08.314 13:09:12 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:25:08.314 [2024-04-17 13:09:12.448969] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:08.572 13:09:12 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:25:08.572 13:09:12 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:08.572 13:09:12 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:08.830 13:09:12 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:25:08.830 13:09:12 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:25:08.830 13:09:12 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:08.830 13:09:12 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:25:08.830 [2024-04-17 13:09:12.840785] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:25:08.830 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:08.830 Zero copy mechanism will not be used. 00:25:08.830 Running I/O for 60 seconds... 
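
The fixture is now complete: bdevperf has a four-member raid1 bdev online and starts its 60-second randrw workload, while the script pulls BaseBdev1 out from under it to exercise a hot-remove during active I/O. A condensed sketch of this step, using only commands that appear in this log (bdevperf was started with -z, so it idles until driven over the RPC socket; exactly how bdev_raid.sh sequences and backgrounds perform_tests is not fully visible here):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # start the background randrw workload against raid_bdev1
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests &
    # hot-remove one member while I/O is in flight; raid1 should degrade to 3 of 4 members, not fail
    $rpc bdev_raid_remove_base_bdev BaseBdev1
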
00:25:08.830 [2024-04-17 13:09:12.954964] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:08.830 [2024-04-17 13:09:12.962997] bdev_raid.c:1969:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ad0 00:25:09.090 13:09:12 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:09.090 13:09:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:09.090 13:09:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:09.090 13:09:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:09.090 13:09:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:09.090 13:09:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:09.090 13:09:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:09.090 13:09:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:09.090 13:09:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:09.090 13:09:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:09.090 13:09:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:09.090 13:09:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:09.350 13:09:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:09.350 "name": "raid_bdev1", 00:25:09.350 "uuid": "73be44e0-733b-49a4-aa95-48b402aa3bee", 00:25:09.350 "strip_size_kb": 0, 00:25:09.350 "state": "online", 00:25:09.350 "raid_level": "raid1", 00:25:09.350 "superblock": false, 00:25:09.350 "num_base_bdevs": 4, 00:25:09.350 "num_base_bdevs_discovered": 3, 00:25:09.350 "num_base_bdevs_operational": 3, 00:25:09.350 "base_bdevs_list": [ 00:25:09.350 { 00:25:09.350 "name": null, 00:25:09.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:09.350 "is_configured": false, 00:25:09.350 "data_offset": 0, 00:25:09.350 "data_size": 65536 00:25:09.350 }, 00:25:09.350 { 00:25:09.350 "name": "BaseBdev2", 00:25:09.350 "uuid": "14b22665-0d3b-4c33-a27c-19d464e111c5", 00:25:09.350 "is_configured": true, 00:25:09.350 "data_offset": 0, 00:25:09.350 "data_size": 65536 00:25:09.350 }, 00:25:09.350 { 00:25:09.350 "name": "BaseBdev3", 00:25:09.350 "uuid": "6c061d0f-6d49-45e7-b8c4-196dc67f2c15", 00:25:09.350 "is_configured": true, 00:25:09.350 "data_offset": 0, 00:25:09.351 "data_size": 65536 00:25:09.351 }, 00:25:09.351 { 00:25:09.351 "name": "BaseBdev4", 00:25:09.351 "uuid": "e9381060-fec2-443c-9b04-57c6df9a5823", 00:25:09.351 "is_configured": true, 00:25:09.351 "data_offset": 0, 00:25:09.351 "data_size": 65536 00:25:09.351 } 00:25:09.351 ] 00:25:09.351 }' 00:25:09.351 13:09:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:09.351 13:09:13 -- common/autotest_common.sh@10 -- # set +x 00:25:10.292 13:09:14 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:10.292 [2024-04-17 13:09:14.341454] bdev_raid.c:3247:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:10.292 [2024-04-17 13:09:14.341742] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:10.292 13:09:14 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:25:10.292 [2024-04-17 13:09:14.404228] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:25:10.292 [2024-04-17 13:09:14.406632] bdev_raid.c:2751:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:10.549 [2024-04-17 
13:09:14.518979] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:10.549 [2024-04-17 13:09:14.519722] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:10.808 [2024-04-17 13:09:14.730588] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:10.808 [2024-04-17 13:09:14.731126] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:11.066 [2024-04-17 13:09:15.068365] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:25:11.066 [2024-04-17 13:09:15.069154] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:25:11.066 [2024-04-17 13:09:15.192264] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:25:11.324 13:09:15 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:11.324 13:09:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:11.324 13:09:15 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:11.324 13:09:15 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:11.324 13:09:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:11.324 13:09:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:11.324 13:09:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:11.324 [2024-04-17 13:09:15.432012] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:25:11.583 13:09:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:11.583 "name": "raid_bdev1", 00:25:11.583 "uuid": "73be44e0-733b-49a4-aa95-48b402aa3bee", 00:25:11.583 "strip_size_kb": 0, 00:25:11.583 "state": "online", 00:25:11.583 "raid_level": "raid1", 00:25:11.583 "superblock": false, 00:25:11.583 "num_base_bdevs": 4, 00:25:11.583 "num_base_bdevs_discovered": 4, 00:25:11.583 "num_base_bdevs_operational": 4, 00:25:11.583 "process": { 00:25:11.583 "type": "rebuild", 00:25:11.583 "target": "spare", 00:25:11.583 "progress": { 00:25:11.583 "blocks": 14336, 00:25:11.583 "percent": 21 00:25:11.583 } 00:25:11.583 }, 00:25:11.583 "base_bdevs_list": [ 00:25:11.583 { 00:25:11.583 "name": "spare", 00:25:11.583 "uuid": "1b0141bb-c491-5041-88c0-0dd67841d8d5", 00:25:11.583 "is_configured": true, 00:25:11.583 "data_offset": 0, 00:25:11.583 "data_size": 65536 00:25:11.583 }, 00:25:11.583 { 00:25:11.583 "name": "BaseBdev2", 00:25:11.583 "uuid": "14b22665-0d3b-4c33-a27c-19d464e111c5", 00:25:11.583 "is_configured": true, 00:25:11.583 "data_offset": 0, 00:25:11.583 "data_size": 65536 00:25:11.583 }, 00:25:11.583 { 00:25:11.583 "name": "BaseBdev3", 00:25:11.583 "uuid": "6c061d0f-6d49-45e7-b8c4-196dc67f2c15", 00:25:11.583 "is_configured": true, 00:25:11.583 "data_offset": 0, 00:25:11.583 "data_size": 65536 00:25:11.583 }, 00:25:11.583 { 00:25:11.583 "name": "BaseBdev4", 00:25:11.583 "uuid": "e9381060-fec2-443c-9b04-57c6df9a5823", 00:25:11.583 "is_configured": true, 00:25:11.583 "data_offset": 0, 00:25:11.583 "data_size": 65536 00:25:11.583 } 00:25:11.583 ] 00:25:11.583 }' 00:25:11.583 13:09:15 -- bdev/bdev_raid.sh@190 -- # 
jq -r '.process.type // "none"' 00:25:11.583 [2024-04-17 13:09:15.682983] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:25:11.583 [2024-04-17 13:09:15.683480] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:25:11.583 13:09:15 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:11.583 13:09:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:11.841 13:09:15 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:11.841 13:09:15 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:12.099 [2024-04-17 13:09:16.062760] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:25:12.099 [2024-04-17 13:09:16.073765] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:12.099 [2024-04-17 13:09:16.101760] bdev_raid.c:2442:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:12.099 [2024-04-17 13:09:16.105625] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:12.099 [2024-04-17 13:09:16.145051] bdev_raid.c:1969:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ad0 00:25:12.099 13:09:16 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:12.099 13:09:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:12.099 13:09:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:12.099 13:09:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:12.099 13:09:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:12.099 13:09:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:12.099 13:09:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:12.099 13:09:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:12.099 13:09:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:12.099 13:09:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:12.099 13:09:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:12.099 13:09:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:12.358 13:09:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:12.358 "name": "raid_bdev1", 00:25:12.358 "uuid": "73be44e0-733b-49a4-aa95-48b402aa3bee", 00:25:12.358 "strip_size_kb": 0, 00:25:12.358 "state": "online", 00:25:12.358 "raid_level": "raid1", 00:25:12.358 "superblock": false, 00:25:12.358 "num_base_bdevs": 4, 00:25:12.358 "num_base_bdevs_discovered": 3, 00:25:12.358 "num_base_bdevs_operational": 3, 00:25:12.358 "base_bdevs_list": [ 00:25:12.358 { 00:25:12.358 "name": null, 00:25:12.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:12.358 "is_configured": false, 00:25:12.358 "data_offset": 0, 00:25:12.358 "data_size": 65536 00:25:12.358 }, 00:25:12.358 { 00:25:12.358 "name": "BaseBdev2", 00:25:12.358 "uuid": "14b22665-0d3b-4c33-a27c-19d464e111c5", 00:25:12.358 "is_configured": true, 00:25:12.358 "data_offset": 0, 00:25:12.358 "data_size": 65536 00:25:12.358 }, 00:25:12.358 { 00:25:12.358 "name": "BaseBdev3", 00:25:12.358 "uuid": "6c061d0f-6d49-45e7-b8c4-196dc67f2c15", 00:25:12.358 "is_configured": true, 00:25:12.358 "data_offset": 0, 
00:25:12.358 "data_size": 65536 00:25:12.358 }, 00:25:12.358 { 00:25:12.358 "name": "BaseBdev4", 00:25:12.358 "uuid": "e9381060-fec2-443c-9b04-57c6df9a5823", 00:25:12.358 "is_configured": true, 00:25:12.358 "data_offset": 0, 00:25:12.358 "data_size": 65536 00:25:12.358 } 00:25:12.358 ] 00:25:12.358 }' 00:25:12.358 13:09:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:12.358 13:09:16 -- common/autotest_common.sh@10 -- # set +x 00:25:13.292 13:09:17 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:13.292 13:09:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:13.292 13:09:17 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:13.292 13:09:17 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:13.292 13:09:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:13.292 13:09:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:13.292 13:09:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:13.292 13:09:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:13.292 "name": "raid_bdev1", 00:25:13.292 "uuid": "73be44e0-733b-49a4-aa95-48b402aa3bee", 00:25:13.292 "strip_size_kb": 0, 00:25:13.292 "state": "online", 00:25:13.292 "raid_level": "raid1", 00:25:13.292 "superblock": false, 00:25:13.292 "num_base_bdevs": 4, 00:25:13.292 "num_base_bdevs_discovered": 3, 00:25:13.292 "num_base_bdevs_operational": 3, 00:25:13.292 "base_bdevs_list": [ 00:25:13.292 { 00:25:13.292 "name": null, 00:25:13.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:13.292 "is_configured": false, 00:25:13.292 "data_offset": 0, 00:25:13.292 "data_size": 65536 00:25:13.292 }, 00:25:13.292 { 00:25:13.292 "name": "BaseBdev2", 00:25:13.292 "uuid": "14b22665-0d3b-4c33-a27c-19d464e111c5", 00:25:13.292 "is_configured": true, 00:25:13.292 "data_offset": 0, 00:25:13.292 "data_size": 65536 00:25:13.292 }, 00:25:13.292 { 00:25:13.292 "name": "BaseBdev3", 00:25:13.292 "uuid": "6c061d0f-6d49-45e7-b8c4-196dc67f2c15", 00:25:13.292 "is_configured": true, 00:25:13.292 "data_offset": 0, 00:25:13.292 "data_size": 65536 00:25:13.292 }, 00:25:13.292 { 00:25:13.292 "name": "BaseBdev4", 00:25:13.292 "uuid": "e9381060-fec2-443c-9b04-57c6df9a5823", 00:25:13.292 "is_configured": true, 00:25:13.292 "data_offset": 0, 00:25:13.292 "data_size": 65536 00:25:13.292 } 00:25:13.292 ] 00:25:13.292 }' 00:25:13.292 13:09:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:13.551 13:09:17 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:13.551 13:09:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:13.551 13:09:17 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:13.551 13:09:17 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:13.811 [2024-04-17 13:09:17.732607] bdev_raid.c:3247:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:13.811 [2024-04-17 13:09:17.732935] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:13.811 13:09:17 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:25:13.811 [2024-04-17 13:09:17.796070] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:25:13.811 [2024-04-17 13:09:17.798583] bdev_raid.c:2751:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:13.811 [2024-04-17 
13:09:17.909851] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:13.811 [2024-04-17 13:09:17.910660] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:14.069 [2024-04-17 13:09:18.050735] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:14.069 [2024-04-17 13:09:18.051800] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:14.327 [2024-04-17 13:09:18.459737] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:25:14.327 [2024-04-17 13:09:18.460461] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:25:14.586 [2024-04-17 13:09:18.693318] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:25:14.844 13:09:18 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:14.844 13:09:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:14.844 13:09:18 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:14.844 13:09:18 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:14.844 13:09:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:14.844 13:09:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:14.844 13:09:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:15.102 [2024-04-17 13:09:19.048001] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:25:15.102 [2024-04-17 13:09:19.048795] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:25:15.102 13:09:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:15.102 "name": "raid_bdev1", 00:25:15.102 "uuid": "73be44e0-733b-49a4-aa95-48b402aa3bee", 00:25:15.102 "strip_size_kb": 0, 00:25:15.102 "state": "online", 00:25:15.102 "raid_level": "raid1", 00:25:15.102 "superblock": false, 00:25:15.102 "num_base_bdevs": 4, 00:25:15.102 "num_base_bdevs_discovered": 4, 00:25:15.102 "num_base_bdevs_operational": 4, 00:25:15.102 "process": { 00:25:15.102 "type": "rebuild", 00:25:15.102 "target": "spare", 00:25:15.102 "progress": { 00:25:15.102 "blocks": 14336, 00:25:15.102 "percent": 21 00:25:15.102 } 00:25:15.102 }, 00:25:15.102 "base_bdevs_list": [ 00:25:15.102 { 00:25:15.102 "name": "spare", 00:25:15.102 "uuid": "1b0141bb-c491-5041-88c0-0dd67841d8d5", 00:25:15.102 "is_configured": true, 00:25:15.102 "data_offset": 0, 00:25:15.102 "data_size": 65536 00:25:15.102 }, 00:25:15.102 { 00:25:15.102 "name": "BaseBdev2", 00:25:15.102 "uuid": "14b22665-0d3b-4c33-a27c-19d464e111c5", 00:25:15.102 "is_configured": true, 00:25:15.102 "data_offset": 0, 00:25:15.102 "data_size": 65536 00:25:15.102 }, 00:25:15.102 { 00:25:15.102 "name": "BaseBdev3", 00:25:15.102 "uuid": "6c061d0f-6d49-45e7-b8c4-196dc67f2c15", 00:25:15.102 "is_configured": true, 00:25:15.102 "data_offset": 0, 00:25:15.102 "data_size": 65536 00:25:15.102 }, 00:25:15.102 { 00:25:15.102 "name": "BaseBdev4", 00:25:15.102 "uuid": "e9381060-fec2-443c-9b04-57c6df9a5823", 00:25:15.102 "is_configured": 
true, 00:25:15.102 "data_offset": 0, 00:25:15.102 "data_size": 65536 00:25:15.102 } 00:25:15.102 ] 00:25:15.102 }' 00:25:15.102 13:09:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:15.102 13:09:19 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:15.102 13:09:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:15.102 [2024-04-17 13:09:19.159914] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:25:15.102 13:09:19 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:15.102 13:09:19 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:25:15.102 13:09:19 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:25:15.102 13:09:19 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:25:15.102 13:09:19 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:25:15.102 13:09:19 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:25:15.360 [2024-04-17 13:09:19.463046] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:15.360 [2024-04-17 13:09:19.465894] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:25:15.360 [2024-04-17 13:09:19.494516] bdev_raid.c:1969:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005ad0 00:25:15.360 [2024-04-17 13:09:19.494772] bdev_raid.c:1969:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005d40 00:25:15.618 [2024-04-17 13:09:19.513946] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:25:15.618 13:09:19 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:25:15.618 13:09:19 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:25:15.618 13:09:19 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:15.618 13:09:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:15.618 13:09:19 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:15.618 13:09:19 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:15.618 13:09:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:15.618 13:09:19 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:15.618 13:09:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:15.618 [2024-04-17 13:09:19.728704] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:25:15.618 [2024-04-17 13:09:19.729381] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:25:15.878 13:09:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:15.878 "name": "raid_bdev1", 00:25:15.878 "uuid": "73be44e0-733b-49a4-aa95-48b402aa3bee", 00:25:15.878 "strip_size_kb": 0, 00:25:15.878 "state": "online", 00:25:15.878 "raid_level": "raid1", 00:25:15.878 "superblock": false, 00:25:15.878 "num_base_bdevs": 4, 00:25:15.878 "num_base_bdevs_discovered": 3, 00:25:15.878 "num_base_bdevs_operational": 3, 00:25:15.878 "process": { 00:25:15.878 "type": "rebuild", 00:25:15.878 "target": "spare", 00:25:15.878 "progress": { 00:25:15.878 "blocks": 22528, 00:25:15.878 "percent": 34 00:25:15.878 } 00:25:15.878 }, 
00:25:15.878 "base_bdevs_list": [ 00:25:15.878 { 00:25:15.878 "name": "spare", 00:25:15.878 "uuid": "1b0141bb-c491-5041-88c0-0dd67841d8d5", 00:25:15.878 "is_configured": true, 00:25:15.878 "data_offset": 0, 00:25:15.878 "data_size": 65536 00:25:15.878 }, 00:25:15.878 { 00:25:15.878 "name": null, 00:25:15.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:15.878 "is_configured": false, 00:25:15.878 "data_offset": 0, 00:25:15.878 "data_size": 65536 00:25:15.878 }, 00:25:15.878 { 00:25:15.878 "name": "BaseBdev3", 00:25:15.878 "uuid": "6c061d0f-6d49-45e7-b8c4-196dc67f2c15", 00:25:15.878 "is_configured": true, 00:25:15.878 "data_offset": 0, 00:25:15.878 "data_size": 65536 00:25:15.878 }, 00:25:15.878 { 00:25:15.878 "name": "BaseBdev4", 00:25:15.878 "uuid": "e9381060-fec2-443c-9b04-57c6df9a5823", 00:25:15.878 "is_configured": true, 00:25:15.878 "data_offset": 0, 00:25:15.878 "data_size": 65536 00:25:15.878 } 00:25:15.878 ] 00:25:15.878 }' 00:25:15.878 13:09:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:15.878 13:09:19 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:15.878 13:09:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:15.878 13:09:19 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:15.878 13:09:19 -- bdev/bdev_raid.sh@657 -- # local timeout=580 00:25:15.878 13:09:19 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:15.878 13:09:19 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:15.878 13:09:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:15.878 13:09:19 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:15.878 13:09:19 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:15.878 13:09:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:15.878 13:09:19 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:15.878 13:09:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:16.138 13:09:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:16.138 "name": "raid_bdev1", 00:25:16.138 "uuid": "73be44e0-733b-49a4-aa95-48b402aa3bee", 00:25:16.138 "strip_size_kb": 0, 00:25:16.138 "state": "online", 00:25:16.138 "raid_level": "raid1", 00:25:16.138 "superblock": false, 00:25:16.138 "num_base_bdevs": 4, 00:25:16.138 "num_base_bdevs_discovered": 3, 00:25:16.138 "num_base_bdevs_operational": 3, 00:25:16.138 "process": { 00:25:16.138 "type": "rebuild", 00:25:16.138 "target": "spare", 00:25:16.138 "progress": { 00:25:16.138 "blocks": 26624, 00:25:16.138 "percent": 40 00:25:16.138 } 00:25:16.138 }, 00:25:16.138 "base_bdevs_list": [ 00:25:16.138 { 00:25:16.138 "name": "spare", 00:25:16.138 "uuid": "1b0141bb-c491-5041-88c0-0dd67841d8d5", 00:25:16.138 "is_configured": true, 00:25:16.138 "data_offset": 0, 00:25:16.138 "data_size": 65536 00:25:16.138 }, 00:25:16.138 { 00:25:16.138 "name": null, 00:25:16.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:16.138 "is_configured": false, 00:25:16.138 "data_offset": 0, 00:25:16.138 "data_size": 65536 00:25:16.138 }, 00:25:16.138 { 00:25:16.138 "name": "BaseBdev3", 00:25:16.138 "uuid": "6c061d0f-6d49-45e7-b8c4-196dc67f2c15", 00:25:16.138 "is_configured": true, 00:25:16.138 "data_offset": 0, 00:25:16.138 "data_size": 65536 00:25:16.138 }, 00:25:16.138 { 00:25:16.138 "name": "BaseBdev4", 00:25:16.138 "uuid": "e9381060-fec2-443c-9b04-57c6df9a5823", 00:25:16.138 "is_configured": 
true, 00:25:16.138 "data_offset": 0, 00:25:16.138 "data_size": 65536 00:25:16.138 } 00:25:16.138 ] 00:25:16.138 }' 00:25:16.138 13:09:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:16.138 13:09:20 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:16.138 13:09:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:16.397 13:09:20 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:16.397 13:09:20 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:16.397 [2024-04-17 13:09:20.527142] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:25:17.334 13:09:21 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:17.334 13:09:21 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:17.334 13:09:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:17.334 13:09:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:17.334 13:09:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:17.334 13:09:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:17.334 13:09:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:17.334 13:09:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:17.593 13:09:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:17.593 "name": "raid_bdev1", 00:25:17.593 "uuid": "73be44e0-733b-49a4-aa95-48b402aa3bee", 00:25:17.593 "strip_size_kb": 0, 00:25:17.593 "state": "online", 00:25:17.593 "raid_level": "raid1", 00:25:17.593 "superblock": false, 00:25:17.593 "num_base_bdevs": 4, 00:25:17.593 "num_base_bdevs_discovered": 3, 00:25:17.593 "num_base_bdevs_operational": 3, 00:25:17.593 "process": { 00:25:17.593 "type": "rebuild", 00:25:17.593 "target": "spare", 00:25:17.593 "progress": { 00:25:17.593 "blocks": 51200, 00:25:17.593 "percent": 78 00:25:17.593 } 00:25:17.593 }, 00:25:17.593 "base_bdevs_list": [ 00:25:17.593 { 00:25:17.593 "name": "spare", 00:25:17.593 "uuid": "1b0141bb-c491-5041-88c0-0dd67841d8d5", 00:25:17.593 "is_configured": true, 00:25:17.593 "data_offset": 0, 00:25:17.593 "data_size": 65536 00:25:17.593 }, 00:25:17.593 { 00:25:17.593 "name": null, 00:25:17.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:17.593 "is_configured": false, 00:25:17.593 "data_offset": 0, 00:25:17.593 "data_size": 65536 00:25:17.593 }, 00:25:17.593 { 00:25:17.593 "name": "BaseBdev3", 00:25:17.593 "uuid": "6c061d0f-6d49-45e7-b8c4-196dc67f2c15", 00:25:17.593 "is_configured": true, 00:25:17.593 "data_offset": 0, 00:25:17.593 "data_size": 65536 00:25:17.593 }, 00:25:17.593 { 00:25:17.593 "name": "BaseBdev4", 00:25:17.593 "uuid": "e9381060-fec2-443c-9b04-57c6df9a5823", 00:25:17.593 "is_configured": true, 00:25:17.593 "data_offset": 0, 00:25:17.593 "data_size": 65536 00:25:17.593 } 00:25:17.593 ] 00:25:17.593 }' 00:25:17.593 13:09:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:17.593 13:09:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:17.593 13:09:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:17.593 13:09:21 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:17.593 13:09:21 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:18.161 [2024-04-17 13:09:22.004158] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:25:18.420 
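
The repeating get-bdevs/jq/sleep pattern above is the rebuild monitor: each pass re-reads raid_bdev1, checks that a rebuild targeting the spare is still running (progress climbs through 21%, 34%, 40%, then 78% in the dumps above), and sleeps one second, bounded by the 580-second budget from 'local timeout=580'; the "process completed" message just below ends the loop. A minimal standalone sketch of the same polling loop, with the RPC call and jq filters taken verbatim from this log:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    timeout=580
    while (( SECONDS < timeout )); do
        info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
        # once the .process field disappears, the rebuild is done and the loop exits
        [[ $(jq -r '.process.type // "none"' <<<"$info") == "rebuild" ]] || break
        [[ $(jq -r '.process.target // "none"' <<<"$info") == "spare" ]] || exit 1
        sleep 1
    done
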
[2024-04-17 13:09:22.330082] bdev_raid.c:2716:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:18.420 [2024-04-17 13:09:22.438062] bdev_raid.c:2433:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:18.420 [2024-04-17 13:09:22.442315] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:18.679 13:09:22 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:18.679 13:09:22 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:18.679 13:09:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:18.679 13:09:22 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:18.679 13:09:22 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:18.679 13:09:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:18.679 13:09:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:18.679 13:09:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:18.938 13:09:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:18.938 "name": "raid_bdev1", 00:25:18.938 "uuid": "73be44e0-733b-49a4-aa95-48b402aa3bee", 00:25:18.938 "strip_size_kb": 0, 00:25:18.938 "state": "online", 00:25:18.938 "raid_level": "raid1", 00:25:18.938 "superblock": false, 00:25:18.938 "num_base_bdevs": 4, 00:25:18.938 "num_base_bdevs_discovered": 3, 00:25:18.938 "num_base_bdevs_operational": 3, 00:25:18.938 "base_bdevs_list": [ 00:25:18.938 { 00:25:18.938 "name": "spare", 00:25:18.938 "uuid": "1b0141bb-c491-5041-88c0-0dd67841d8d5", 00:25:18.938 "is_configured": true, 00:25:18.938 "data_offset": 0, 00:25:18.938 "data_size": 65536 00:25:18.938 }, 00:25:18.938 { 00:25:18.938 "name": null, 00:25:18.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:18.938 "is_configured": false, 00:25:18.938 "data_offset": 0, 00:25:18.938 "data_size": 65536 00:25:18.938 }, 00:25:18.938 { 00:25:18.938 "name": "BaseBdev3", 00:25:18.938 "uuid": "6c061d0f-6d49-45e7-b8c4-196dc67f2c15", 00:25:18.938 "is_configured": true, 00:25:18.938 "data_offset": 0, 00:25:18.938 "data_size": 65536 00:25:18.938 }, 00:25:18.938 { 00:25:18.938 "name": "BaseBdev4", 00:25:18.938 "uuid": "e9381060-fec2-443c-9b04-57c6df9a5823", 00:25:18.938 "is_configured": true, 00:25:18.938 "data_offset": 0, 00:25:18.938 "data_size": 65536 00:25:18.938 } 00:25:18.938 ] 00:25:18.938 }' 00:25:18.939 13:09:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:18.939 13:09:23 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:18.939 13:09:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:18.939 13:09:23 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:25:18.939 13:09:23 -- bdev/bdev_raid.sh@660 -- # break 00:25:18.939 13:09:23 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:18.939 13:09:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:18.939 13:09:23 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:18.939 13:09:23 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:18.939 13:09:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:18.939 13:09:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:18.939 13:09:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:19.198 13:09:23 -- bdev/bdev_raid.sh@188 
-- # raid_bdev_info='{ 00:25:19.198 "name": "raid_bdev1", 00:25:19.198 "uuid": "73be44e0-733b-49a4-aa95-48b402aa3bee", 00:25:19.198 "strip_size_kb": 0, 00:25:19.198 "state": "online", 00:25:19.198 "raid_level": "raid1", 00:25:19.198 "superblock": false, 00:25:19.198 "num_base_bdevs": 4, 00:25:19.198 "num_base_bdevs_discovered": 3, 00:25:19.198 "num_base_bdevs_operational": 3, 00:25:19.198 "base_bdevs_list": [ 00:25:19.198 { 00:25:19.198 "name": "spare", 00:25:19.198 "uuid": "1b0141bb-c491-5041-88c0-0dd67841d8d5", 00:25:19.198 "is_configured": true, 00:25:19.198 "data_offset": 0, 00:25:19.198 "data_size": 65536 00:25:19.198 }, 00:25:19.198 { 00:25:19.198 "name": null, 00:25:19.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:19.198 "is_configured": false, 00:25:19.198 "data_offset": 0, 00:25:19.198 "data_size": 65536 00:25:19.198 }, 00:25:19.198 { 00:25:19.198 "name": "BaseBdev3", 00:25:19.198 "uuid": "6c061d0f-6d49-45e7-b8c4-196dc67f2c15", 00:25:19.198 "is_configured": true, 00:25:19.198 "data_offset": 0, 00:25:19.198 "data_size": 65536 00:25:19.198 }, 00:25:19.198 { 00:25:19.198 "name": "BaseBdev4", 00:25:19.198 "uuid": "e9381060-fec2-443c-9b04-57c6df9a5823", 00:25:19.198 "is_configured": true, 00:25:19.198 "data_offset": 0, 00:25:19.198 "data_size": 65536 00:25:19.198 } 00:25:19.198 ] 00:25:19.198 }' 00:25:19.198 13:09:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:19.459 13:09:23 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:19.459 13:09:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:19.459 13:09:23 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:19.459 13:09:23 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:19.459 13:09:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:19.459 13:09:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:19.459 13:09:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:19.459 13:09:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:19.459 13:09:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:19.459 13:09:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:19.459 13:09:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:19.459 13:09:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:19.459 13:09:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:19.459 13:09:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:19.459 13:09:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:19.768 13:09:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:19.768 "name": "raid_bdev1", 00:25:19.768 "uuid": "73be44e0-733b-49a4-aa95-48b402aa3bee", 00:25:19.768 "strip_size_kb": 0, 00:25:19.768 "state": "online", 00:25:19.768 "raid_level": "raid1", 00:25:19.768 "superblock": false, 00:25:19.768 "num_base_bdevs": 4, 00:25:19.768 "num_base_bdevs_discovered": 3, 00:25:19.768 "num_base_bdevs_operational": 3, 00:25:19.768 "base_bdevs_list": [ 00:25:19.768 { 00:25:19.768 "name": "spare", 00:25:19.768 "uuid": "1b0141bb-c491-5041-88c0-0dd67841d8d5", 00:25:19.768 "is_configured": true, 00:25:19.768 "data_offset": 0, 00:25:19.768 "data_size": 65536 00:25:19.768 }, 00:25:19.768 { 00:25:19.768 "name": null, 00:25:19.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:19.768 "is_configured": false, 00:25:19.768 "data_offset": 0, 00:25:19.768 
"data_size": 65536 00:25:19.768 }, 00:25:19.768 { 00:25:19.768 "name": "BaseBdev3", 00:25:19.768 "uuid": "6c061d0f-6d49-45e7-b8c4-196dc67f2c15", 00:25:19.768 "is_configured": true, 00:25:19.768 "data_offset": 0, 00:25:19.768 "data_size": 65536 00:25:19.768 }, 00:25:19.768 { 00:25:19.768 "name": "BaseBdev4", 00:25:19.768 "uuid": "e9381060-fec2-443c-9b04-57c6df9a5823", 00:25:19.768 "is_configured": true, 00:25:19.768 "data_offset": 0, 00:25:19.768 "data_size": 65536 00:25:19.768 } 00:25:19.768 ] 00:25:19.768 }' 00:25:19.768 13:09:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:19.768 13:09:23 -- common/autotest_common.sh@10 -- # set +x 00:25:20.704 13:09:24 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:20.704 [2024-04-17 13:09:24.742274] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:20.704 [2024-04-17 13:09:24.742514] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:20.704 00:25:20.705 Latency(us) 00:25:20.705 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:20.705 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:25:20.705 raid_bdev1 : 11.92 96.32 288.96 0.00 0.00 14225.36 335.13 132501.88 00:25:20.705 =================================================================================================================== 00:25:20.705 Total : 96.32 288.96 0.00 0.00 14225.36 335.13 132501.88 00:25:20.705 [2024-04-17 13:09:24.781932] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:20.705 0 00:25:20.705 [2024-04-17 13:09:24.782238] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:20.705 [2024-04-17 13:09:24.782447] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:20.705 [2024-04-17 13:09:24.782567] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:25:20.705 13:09:24 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:20.705 13:09:24 -- bdev/bdev_raid.sh@671 -- # jq length 00:25:20.963 13:09:25 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:25:20.963 13:09:25 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:25:20.963 13:09:25 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:25:20.963 13:09:25 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:20.963 13:09:25 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:25:20.963 13:09:25 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:20.963 13:09:25 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:25:20.963 13:09:25 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:20.963 13:09:25 -- bdev/nbd_common.sh@12 -- # local i 00:25:20.963 13:09:25 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:20.963 13:09:25 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:20.963 13:09:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:25:21.221 /dev/nbd0 00:25:21.221 13:09:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:21.479 13:09:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:21.479 13:09:25 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:25:21.479 13:09:25 -- 
common/autotest_common.sh@855 -- # local i 00:25:21.479 13:09:25 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:25:21.479 13:09:25 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:25:21.479 13:09:25 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:25:21.479 13:09:25 -- common/autotest_common.sh@859 -- # break 00:25:21.479 13:09:25 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:25:21.479 13:09:25 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:25:21.479 13:09:25 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:21.479 1+0 records in 00:25:21.479 1+0 records out 00:25:21.479 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000771168 s, 5.3 MB/s 00:25:21.479 13:09:25 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:21.479 13:09:25 -- common/autotest_common.sh@872 -- # size=4096 00:25:21.479 13:09:25 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:21.479 13:09:25 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:25:21.479 13:09:25 -- common/autotest_common.sh@875 -- # return 0 00:25:21.479 13:09:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:21.479 13:09:25 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:21.479 13:09:25 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:25:21.479 13:09:25 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:25:21.479 13:09:25 -- bdev/bdev_raid.sh@678 -- # continue 00:25:21.479 13:09:25 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:25:21.479 13:09:25 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:25:21.479 13:09:25 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:25:21.479 13:09:25 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:21.479 13:09:25 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:25:21.479 13:09:25 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:21.479 13:09:25 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:25:21.479 13:09:25 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:21.479 13:09:25 -- bdev/nbd_common.sh@12 -- # local i 00:25:21.479 13:09:25 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:21.479 13:09:25 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:21.479 13:09:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:25:21.738 /dev/nbd1 00:25:21.738 13:09:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:21.738 13:09:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:21.738 13:09:25 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:25:21.738 13:09:25 -- common/autotest_common.sh@855 -- # local i 00:25:21.738 13:09:25 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:25:21.738 13:09:25 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:25:21.738 13:09:25 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:25:21.738 13:09:25 -- common/autotest_common.sh@859 -- # break 00:25:21.738 13:09:25 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:25:21.738 13:09:25 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:25:21.738 13:09:25 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:21.738 1+0 records in 00:25:21.738 1+0 records out 00:25:21.738 4096 bytes (4.1 kB, 4.0 
KiB) copied, 0.000465906 s, 8.8 MB/s 00:25:21.738 13:09:25 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:21.738 13:09:25 -- common/autotest_common.sh@872 -- # size=4096 00:25:21.738 13:09:25 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:21.738 13:09:25 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:25:21.738 13:09:25 -- common/autotest_common.sh@875 -- # return 0 00:25:21.738 13:09:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:21.738 13:09:25 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:21.738 13:09:25 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:25:21.738 13:09:25 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:25:21.738 13:09:25 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:21.738 13:09:25 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:25:21.738 13:09:25 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:21.738 13:09:25 -- bdev/nbd_common.sh@51 -- # local i 00:25:21.738 13:09:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:21.738 13:09:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:21.996 13:09:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:22.254 13:09:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:22.254 13:09:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:22.254 13:09:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:22.254 13:09:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:22.254 13:09:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:22.254 13:09:26 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:25:22.254 13:09:26 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:25:22.254 13:09:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:22.254 13:09:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:22.254 13:09:26 -- bdev/nbd_common.sh@41 -- # break 00:25:22.254 13:09:26 -- bdev/nbd_common.sh@45 -- # return 0 00:25:22.254 13:09:26 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:25:22.254 13:09:26 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:25:22.254 13:09:26 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:25:22.254 13:09:26 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:22.254 13:09:26 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:25:22.254 13:09:26 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:22.254 13:09:26 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:25:22.254 13:09:26 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:22.254 13:09:26 -- bdev/nbd_common.sh@12 -- # local i 00:25:22.254 13:09:26 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:22.254 13:09:26 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:22.254 13:09:26 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:25:22.513 /dev/nbd1 00:25:22.513 13:09:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:22.513 13:09:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:22.513 13:09:26 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:25:22.513 13:09:26 -- common/autotest_common.sh@855 -- # local i 00:25:22.513 13:09:26 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:25:22.513 13:09:26 -- common/autotest_common.sh@857 -- # (( i 
<= 20 )) 00:25:22.513 13:09:26 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:25:22.513 13:09:26 -- common/autotest_common.sh@859 -- # break 00:25:22.513 13:09:26 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:25:22.513 13:09:26 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:25:22.513 13:09:26 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:22.513 1+0 records in 00:25:22.513 1+0 records out 00:25:22.513 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000609924 s, 6.7 MB/s 00:25:22.513 13:09:26 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:22.513 13:09:26 -- common/autotest_common.sh@872 -- # size=4096 00:25:22.513 13:09:26 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:22.513 13:09:26 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:25:22.513 13:09:26 -- common/autotest_common.sh@875 -- # return 0 00:25:22.513 13:09:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:22.513 13:09:26 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:22.513 13:09:26 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:25:22.772 13:09:26 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:25:22.772 13:09:26 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:22.772 13:09:26 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:25:22.772 13:09:26 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:22.772 13:09:26 -- bdev/nbd_common.sh@51 -- # local i 00:25:22.772 13:09:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:22.772 13:09:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:23.030 13:09:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:23.030 13:09:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:23.030 13:09:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:23.030 13:09:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:23.030 13:09:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:23.030 13:09:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:23.030 13:09:26 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:25:23.030 13:09:27 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:25:23.030 13:09:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:23.030 13:09:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:23.030 13:09:27 -- bdev/nbd_common.sh@41 -- # break 00:25:23.030 13:09:27 -- bdev/nbd_common.sh@45 -- # return 0 00:25:23.030 13:09:27 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:25:23.030 13:09:27 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:23.030 13:09:27 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:25:23.030 13:09:27 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:23.030 13:09:27 -- bdev/nbd_common.sh@51 -- # local i 00:25:23.030 13:09:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:23.030 13:09:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:23.289 13:09:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:23.289 13:09:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:23.289 13:09:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:23.289 13:09:27 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:23.289 13:09:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:23.289 13:09:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:23.289 13:09:27 -- bdev/nbd_common.sh@41 -- # break 00:25:23.289 13:09:27 -- bdev/nbd_common.sh@45 -- # return 0 00:25:23.289 13:09:27 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:25:23.289 13:09:27 -- bdev/bdev_raid.sh@709 -- # killprocess 134382 00:25:23.289 13:09:27 -- common/autotest_common.sh@924 -- # '[' -z 134382 ']' 00:25:23.289 13:09:27 -- common/autotest_common.sh@928 -- # kill -0 134382 00:25:23.289 13:09:27 -- common/autotest_common.sh@929 -- # uname 00:25:23.289 13:09:27 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:25:23.289 13:09:27 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 134382 00:25:23.289 13:09:27 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:25:23.289 13:09:27 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:25:23.289 13:09:27 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 134382' 00:25:23.289 killing process with pid 134382 00:25:23.289 13:09:27 -- common/autotest_common.sh@943 -- # kill 134382
00:25:23.289 Received shutdown signal, test time was about 14.526395 seconds
00:25:23.289
00:25:23.289 Latency(us)
00:25:23.289 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:23.289 ===================================================================================================================
00:25:23.289 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:23.289 [2024-04-17 13:09:27.369895] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:23.289 13:09:27 -- common/autotest_common.sh@948 -- # wait 134382 00:25:23.856 [2024-04-17 13:09:27.745104] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:25.235 ************************************ 00:25:25.235 END TEST raid_rebuild_test_io 00:25:25.235 ************************************ 00:25:25.235 13:09:28 -- bdev/bdev_raid.sh@711 -- # return 0 00:25:25.235 00:25:25.235 real 0m20.864s 00:25:25.235 user 0m33.019s 00:25:25.235 sys 0m2.366s 00:25:25.235 13:09:28 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:25:25.235 13:09:28 -- common/autotest_common.sh@10 -- # set +x 00:25:25.235 13:09:29 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true 00:25:25.235 13:09:29 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:25:25.235 13:09:29 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:25:25.235 13:09:29 -- common/autotest_common.sh@10 -- # set +x 00:25:25.235 ************************************ 00:25:25.235 START TEST raid_rebuild_test_sb_io 00:25:25.235 ************************************ 00:25:25.235 13:09:29 -- common/autotest_common.sh@1099 -- # raid_rebuild_test raid1 4 true true 00:25:25.235 13:09:29 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:25:25.235 13:09:29 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:25:25.235 13:09:29 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:25:25.235 13:09:29 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:25:25.235 13:09:29 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:25:25.235 13:09:29 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:25:25.235 13:09:29 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:25.235 13:09:29 -- bdev/bdev_raid.sh@521 -- #
echo BaseBdev1 00:25:25.235 13:09:29 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:25.235 13:09:29 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:25.235 13:09:29 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:25:25.235 13:09:29 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:25.235 13:09:29 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:25.235 13:09:29 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:25:25.235 13:09:29 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:25.235 13:09:29 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:25.235 13:09:29 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:25:25.235 13:09:29 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:25.235 13:09:29 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:25.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:25.235 13:09:29 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:25:25.235 13:09:29 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:25:25.235 13:09:29 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:25:25.235 13:09:29 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:25:25.235 13:09:29 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:25:25.235 13:09:29 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:25:25.235 13:09:29 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:25:25.235 13:09:29 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:25:25.235 13:09:29 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:25:25.235 13:09:29 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:25:25.235 13:09:29 -- bdev/bdev_raid.sh@544 -- # raid_pid=134951 00:25:25.235 13:09:29 -- bdev/bdev_raid.sh@545 -- # waitforlisten 134951 /var/tmp/spdk-raid.sock 00:25:25.235 13:09:29 -- common/autotest_common.sh@817 -- # '[' -z 134951 ']' 00:25:25.235 13:09:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:25.235 13:09:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:25.235 13:09:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:25.235 13:09:29 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:25.235 13:09:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:25.235 13:09:29 -- common/autotest_common.sh@10 -- # set +x 00:25:25.235 [2024-04-17 13:09:29.110173] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:25:25.235 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:25.235 Zero copy mechanism will not be used. 
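
This second run drives the same rebuild-under-I/O flow with superblock=true: bdev_raid.sh@540 above appends ' -s' to create_arg so the array will be created with on-disk superblock metadata, and, as the RPCs that follow show, each member is now a passthru bdev stacked on a malloc bdev instead of a bare malloc. A condensed sketch of that per-member construction, with names and sizes copied from the log (the real script iterates over base_bdevs with its own conditionals):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3 4; do
        # 32 MB malloc backing device with 512-byte blocks
        $rpc bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
        # passthru vbdev on top; its registration shows up as the vbdev_passthru notices below
        $rpc bdev_passthru_create -b "BaseBdev${i}_malloc" -p "BaseBdev${i}"
    done
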
00:25:25.235 [2024-04-17 13:09:29.110345] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134951 ] 00:25:25.235 [2024-04-17 13:09:29.268046] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.494 [2024-04-17 13:09:29.502259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:25.752 [2024-04-17 13:09:29.698866] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:26.009 13:09:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:26.009 13:09:30 -- common/autotest_common.sh@850 -- # return 0 00:25:26.009 13:09:30 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:26.009 13:09:30 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:26.009 13:09:30 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:26.268 BaseBdev1_malloc 00:25:26.268 13:09:30 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:26.526 [2024-04-17 13:09:30.662186] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:26.526 [2024-04-17 13:09:30.662304] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:26.526 [2024-04-17 13:09:30.662343] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:25:26.526 [2024-04-17 13:09:30.662395] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:26.526 [2024-04-17 13:09:30.665003] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:26.526 [2024-04-17 13:09:30.665065] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:26.526 BaseBdev1 00:25:26.786 13:09:30 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:26.786 13:09:30 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:26.786 13:09:30 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:27.084 BaseBdev2_malloc 00:25:27.084 13:09:31 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:27.342 [2024-04-17 13:09:31.270231] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:27.342 [2024-04-17 13:09:31.270381] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:27.342 [2024-04-17 13:09:31.270451] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:25:27.342 [2024-04-17 13:09:31.270533] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:27.342 [2024-04-17 13:09:31.273247] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:27.342 [2024-04-17 13:09:31.273304] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:27.342 BaseBdev2 00:25:27.342 13:09:31 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:27.342 13:09:31 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:27.343 13:09:31 -- bdev/bdev_raid.sh@550 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:27.601 BaseBdev3_malloc 00:25:27.601 13:09:31 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:25:27.860 [2024-04-17 13:09:31.773504] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:25:27.860 [2024-04-17 13:09:31.773608] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:27.860 [2024-04-17 13:09:31.773665] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:25:27.860 [2024-04-17 13:09:31.773709] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:27.860 [2024-04-17 13:09:31.776240] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:27.860 [2024-04-17 13:09:31.776300] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:27.860 BaseBdev3 00:25:27.860 13:09:31 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:27.860 13:09:31 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:27.860 13:09:31 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:25:28.118 BaseBdev4_malloc 00:25:28.118 13:09:32 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:25:28.376 [2024-04-17 13:09:32.321289] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:25:28.376 [2024-04-17 13:09:32.321405] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:28.376 [2024-04-17 13:09:32.321448] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:25:28.376 [2024-04-17 13:09:32.321498] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:28.376 [2024-04-17 13:09:32.324137] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:28.376 [2024-04-17 13:09:32.324199] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:28.376 BaseBdev4 00:25:28.376 13:09:32 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:25:28.634 spare_malloc 00:25:28.634 13:09:32 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:28.892 spare_delay 00:25:28.892 13:09:32 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:29.150 [2024-04-17 13:09:33.162447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:29.150 [2024-04-17 13:09:33.162572] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:29.150 [2024-04-17 13:09:33.162613] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:25:29.150 [2024-04-17 13:09:33.162662] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:29.150 [2024-04-17 13:09:33.165282] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:25:29.150 [2024-04-17 13:09:33.165351] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:29.150 spare 00:25:29.150 13:09:33 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:25:29.409 [2024-04-17 13:09:33.490631] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:29.409 [2024-04-17 13:09:33.492983] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:29.409 [2024-04-17 13:09:33.493089] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:29.409 [2024-04-17 13:09:33.493156] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:29.409 [2024-04-17 13:09:33.493410] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:25:29.409 [2024-04-17 13:09:33.493437] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:29.409 [2024-04-17 13:09:33.493586] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:25:29.409 [2024-04-17 13:09:33.494016] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:25:29.409 [2024-04-17 13:09:33.494041] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:25:29.409 [2024-04-17 13:09:33.494230] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:29.409 13:09:33 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:25:29.409 13:09:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:29.409 13:09:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:29.409 13:09:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:29.409 13:09:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:29.409 13:09:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:29.409 13:09:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:29.409 13:09:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:29.409 13:09:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:29.409 13:09:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:29.409 13:09:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:29.409 13:09:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:29.668 13:09:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:29.668 "name": "raid_bdev1", 00:25:29.668 "uuid": "64eccf8a-5dbf-471c-998b-1c0d719a00e4", 00:25:29.668 "strip_size_kb": 0, 00:25:29.668 "state": "online", 00:25:29.668 "raid_level": "raid1", 00:25:29.668 "superblock": true, 00:25:29.668 "num_base_bdevs": 4, 00:25:29.668 "num_base_bdevs_discovered": 4, 00:25:29.668 "num_base_bdevs_operational": 4, 00:25:29.668 "base_bdevs_list": [ 00:25:29.668 { 00:25:29.668 "name": "BaseBdev1", 00:25:29.668 "uuid": "76b29569-863e-525f-9730-ae9e3ce70463", 00:25:29.668 "is_configured": true, 00:25:29.668 "data_offset": 2048, 00:25:29.668 "data_size": 63488 00:25:29.668 }, 00:25:29.668 { 00:25:29.668 "name": "BaseBdev2", 00:25:29.668 "uuid": "6a0d5f61-2321-52b5-bb73-2a7a816d2546", 00:25:29.668 "is_configured": true, 00:25:29.668 "data_offset": 2048, 
00:25:29.668 "data_size": 63488 00:25:29.668 }, 00:25:29.668 { 00:25:29.668 "name": "BaseBdev3", 00:25:29.668 "uuid": "bbe427b8-ea4f-586f-922e-97d44c0197a0", 00:25:29.668 "is_configured": true, 00:25:29.668 "data_offset": 2048, 00:25:29.668 "data_size": 63488 00:25:29.668 }, 00:25:29.668 { 00:25:29.668 "name": "BaseBdev4", 00:25:29.668 "uuid": "29609673-d6a9-5bef-bd15-48d315dc39f9", 00:25:29.668 "is_configured": true, 00:25:29.668 "data_offset": 2048, 00:25:29.668 "data_size": 63488 00:25:29.668 } 00:25:29.668 ] 00:25:29.668 }' 00:25:29.668 13:09:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:29.668 13:09:33 -- common/autotest_common.sh@10 -- # set +x 00:25:30.612 13:09:34 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:30.612 13:09:34 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:25:30.612 [2024-04-17 13:09:34.731218] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:30.612 13:09:34 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:25:30.612 13:09:34 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:30.612 13:09:34 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:31.180 13:09:35 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:25:31.180 13:09:35 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:25:31.180 13:09:35 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:31.180 13:09:35 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:25:31.180 [2024-04-17 13:09:35.127298] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:31.180 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:31.180 Zero copy mechanism will not be used. 00:25:31.180 Running I/O for 60 seconds... 
00:25:31.180 [2024-04-17 13:09:35.288214] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:31.180 [2024-04-17 13:09:35.304577] bdev_raid.c:1969:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005e10 00:25:31.440 13:09:35 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:31.440 13:09:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:31.440 13:09:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:31.440 13:09:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:31.440 13:09:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:31.440 13:09:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:31.440 13:09:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:31.440 13:09:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:31.440 13:09:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:31.440 13:09:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:31.440 13:09:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:31.440 13:09:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:31.700 13:09:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:31.700 "name": "raid_bdev1", 00:25:31.700 "uuid": "64eccf8a-5dbf-471c-998b-1c0d719a00e4", 00:25:31.700 "strip_size_kb": 0, 00:25:31.700 "state": "online", 00:25:31.700 "raid_level": "raid1", 00:25:31.700 "superblock": true, 00:25:31.700 "num_base_bdevs": 4, 00:25:31.700 "num_base_bdevs_discovered": 3, 00:25:31.700 "num_base_bdevs_operational": 3, 00:25:31.700 "base_bdevs_list": [ 00:25:31.700 { 00:25:31.700 "name": null, 00:25:31.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:31.700 "is_configured": false, 00:25:31.700 "data_offset": 2048, 00:25:31.700 "data_size": 63488 00:25:31.700 }, 00:25:31.700 { 00:25:31.700 "name": "BaseBdev2", 00:25:31.700 "uuid": "6a0d5f61-2321-52b5-bb73-2a7a816d2546", 00:25:31.700 "is_configured": true, 00:25:31.700 "data_offset": 2048, 00:25:31.700 "data_size": 63488 00:25:31.700 }, 00:25:31.700 { 00:25:31.700 "name": "BaseBdev3", 00:25:31.700 "uuid": "bbe427b8-ea4f-586f-922e-97d44c0197a0", 00:25:31.700 "is_configured": true, 00:25:31.700 "data_offset": 2048, 00:25:31.700 "data_size": 63488 00:25:31.700 }, 00:25:31.700 { 00:25:31.701 "name": "BaseBdev4", 00:25:31.701 "uuid": "29609673-d6a9-5bef-bd15-48d315dc39f9", 00:25:31.701 "is_configured": true, 00:25:31.701 "data_offset": 2048, 00:25:31.701 "data_size": 63488 00:25:31.701 } 00:25:31.701 ] 00:25:31.701 }' 00:25:31.701 13:09:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:31.701 13:09:35 -- common/autotest_common.sh@10 -- # set +x 00:25:32.636 13:09:36 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:32.636 [2024-04-17 13:09:36.734824] bdev_raid.c:3247:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:32.636 [2024-04-17 13:09:36.734901] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:32.894 13:09:36 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:25:32.894 [2024-04-17 13:09:36.809234] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:32.894 [2024-04-17 13:09:36.811762] bdev_raid.c:2751:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:32.894 
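
The hot-remove under load just traced is the heart of the test: the randrw workload is started over RPC, BaseBdev1 is pulled out from underneath it, the array must stay online in degraded 3-of-4 form, and the delayed spare is then attached, which starts a rebuild automatically. Sketched below; `verify_raid_bdev_state`/`verify_raid_bdev_process` are helpers from bdev_raid.sh, `$RPC` is the shorthand from the earlier sketch, and the `&`-ordering is inferred from the script's line numbers rather than the interleaved trace:

    RPC="scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    "$rootdir"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests &
    $RPC bdev_raid_remove_base_bdev BaseBdev1          # yank a member mid-I/O
    verify_raid_bdev_state raid_bdev1 online raid1 0 3 # degraded but online
    $RPC bdev_raid_add_base_bdev raid_bdev1 spare      # triggers "Started rebuild"
    verify_raid_bdev_process raid_bdev1 rebuild spare
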
[2024-04-17 13:09:36.923350] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:32.894 [2024-04-17 13:09:36.923965] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:33.153 [2024-04-17 13:09:37.138135] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:33.153 [2024-04-17 13:09:37.138966] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:33.720 [2024-04-17 13:09:37.614780] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:25:33.720 [2024-04-17 13:09:37.615583] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:25:33.720 13:09:37 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:33.720 13:09:37 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:33.720 13:09:37 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:33.720 13:09:37 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:33.720 13:09:37 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:33.720 13:09:37 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:33.720 13:09:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:33.980 13:09:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:33.980 "name": "raid_bdev1", 00:25:33.980 "uuid": "64eccf8a-5dbf-471c-998b-1c0d719a00e4", 00:25:33.980 "strip_size_kb": 0, 00:25:33.980 "state": "online", 00:25:33.980 "raid_level": "raid1", 00:25:33.980 "superblock": true, 00:25:33.980 "num_base_bdevs": 4, 00:25:33.980 "num_base_bdevs_discovered": 4, 00:25:33.980 "num_base_bdevs_operational": 4, 00:25:33.980 "process": { 00:25:33.980 "type": "rebuild", 00:25:33.980 "target": "spare", 00:25:33.980 "progress": { 00:25:33.980 "blocks": 14336, 00:25:33.980 "percent": 22 00:25:33.980 } 00:25:33.980 }, 00:25:33.980 "base_bdevs_list": [ 00:25:33.980 { 00:25:33.980 "name": "spare", 00:25:33.980 "uuid": "64e3113d-b93f-566d-9509-264737e368fc", 00:25:33.980 "is_configured": true, 00:25:33.980 "data_offset": 2048, 00:25:33.980 "data_size": 63488 00:25:33.980 }, 00:25:33.980 { 00:25:33.980 "name": "BaseBdev2", 00:25:33.980 "uuid": "6a0d5f61-2321-52b5-bb73-2a7a816d2546", 00:25:33.980 "is_configured": true, 00:25:33.980 "data_offset": 2048, 00:25:33.980 "data_size": 63488 00:25:33.980 }, 00:25:33.980 { 00:25:33.980 "name": "BaseBdev3", 00:25:33.980 "uuid": "bbe427b8-ea4f-586f-922e-97d44c0197a0", 00:25:33.980 "is_configured": true, 00:25:33.980 "data_offset": 2048, 00:25:33.980 "data_size": 63488 00:25:33.980 }, 00:25:33.980 { 00:25:33.980 "name": "BaseBdev4", 00:25:33.980 "uuid": "29609673-d6a9-5bef-bd15-48d315dc39f9", 00:25:33.980 "is_configured": true, 00:25:33.980 "data_offset": 2048, 00:25:33.980 "data_size": 63488 00:25:33.980 } 00:25:33.980 ] 00:25:33.980 }' 00:25:33.980 13:09:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:33.980 13:09:38 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:33.980 13:09:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:34.239 13:09:38 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:34.239 13:09:38 
-- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:34.239 [2024-04-17 13:09:38.313646] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:25:34.497 [2024-04-17 13:09:38.455915] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:34.498 [2024-04-17 13:09:38.538811] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:25:34.498 [2024-04-17 13:09:38.565404] bdev_raid.c:2442:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:34.498 [2024-04-17 13:09:38.579027] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:34.498 [2024-04-17 13:09:38.613246] bdev_raid.c:1969:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005e10 00:25:34.756 13:09:38 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:34.756 13:09:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:34.757 13:09:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:34.757 13:09:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:34.757 13:09:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:34.757 13:09:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:34.757 13:09:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:34.757 13:09:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:34.757 13:09:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:34.757 13:09:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:34.757 13:09:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:34.757 13:09:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:35.015 13:09:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:35.015 "name": "raid_bdev1", 00:25:35.015 "uuid": "64eccf8a-5dbf-471c-998b-1c0d719a00e4", 00:25:35.015 "strip_size_kb": 0, 00:25:35.015 "state": "online", 00:25:35.015 "raid_level": "raid1", 00:25:35.015 "superblock": true, 00:25:35.015 "num_base_bdevs": 4, 00:25:35.015 "num_base_bdevs_discovered": 3, 00:25:35.015 "num_base_bdevs_operational": 3, 00:25:35.015 "base_bdevs_list": [ 00:25:35.015 { 00:25:35.015 "name": null, 00:25:35.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:35.015 "is_configured": false, 00:25:35.015 "data_offset": 2048, 00:25:35.015 "data_size": 63488 00:25:35.015 }, 00:25:35.015 { 00:25:35.015 "name": "BaseBdev2", 00:25:35.015 "uuid": "6a0d5f61-2321-52b5-bb73-2a7a816d2546", 00:25:35.015 "is_configured": true, 00:25:35.015 "data_offset": 2048, 00:25:35.015 "data_size": 63488 00:25:35.015 }, 00:25:35.015 { 00:25:35.015 "name": "BaseBdev3", 00:25:35.015 "uuid": "bbe427b8-ea4f-586f-922e-97d44c0197a0", 00:25:35.015 "is_configured": true, 00:25:35.015 "data_offset": 2048, 00:25:35.015 "data_size": 63488 00:25:35.015 }, 00:25:35.016 { 00:25:35.016 "name": "BaseBdev4", 00:25:35.016 "uuid": "29609673-d6a9-5bef-bd15-48d315dc39f9", 00:25:35.016 "is_configured": true, 00:25:35.016 "data_offset": 2048, 00:25:35.016 "data_size": 63488 00:25:35.016 } 00:25:35.016 ] 00:25:35.016 }' 00:25:35.016 13:09:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:35.016 13:09:38 -- common/autotest_common.sh@10 -- # set +x 00:25:35.585 13:09:39 
-- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:35.585 13:09:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:35.585 13:09:39 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:35.585 13:09:39 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:35.585 13:09:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:35.585 13:09:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:35.585 13:09:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:36.153 13:09:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:36.153 "name": "raid_bdev1", 00:25:36.153 "uuid": "64eccf8a-5dbf-471c-998b-1c0d719a00e4", 00:25:36.153 "strip_size_kb": 0, 00:25:36.153 "state": "online", 00:25:36.153 "raid_level": "raid1", 00:25:36.153 "superblock": true, 00:25:36.153 "num_base_bdevs": 4, 00:25:36.153 "num_base_bdevs_discovered": 3, 00:25:36.153 "num_base_bdevs_operational": 3, 00:25:36.153 "base_bdevs_list": [ 00:25:36.153 { 00:25:36.153 "name": null, 00:25:36.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:36.153 "is_configured": false, 00:25:36.153 "data_offset": 2048, 00:25:36.153 "data_size": 63488 00:25:36.153 }, 00:25:36.153 { 00:25:36.153 "name": "BaseBdev2", 00:25:36.153 "uuid": "6a0d5f61-2321-52b5-bb73-2a7a816d2546", 00:25:36.153 "is_configured": true, 00:25:36.153 "data_offset": 2048, 00:25:36.153 "data_size": 63488 00:25:36.153 }, 00:25:36.153 { 00:25:36.153 "name": "BaseBdev3", 00:25:36.153 "uuid": "bbe427b8-ea4f-586f-922e-97d44c0197a0", 00:25:36.153 "is_configured": true, 00:25:36.153 "data_offset": 2048, 00:25:36.153 "data_size": 63488 00:25:36.153 }, 00:25:36.153 { 00:25:36.153 "name": "BaseBdev4", 00:25:36.153 "uuid": "29609673-d6a9-5bef-bd15-48d315dc39f9", 00:25:36.153 "is_configured": true, 00:25:36.153 "data_offset": 2048, 00:25:36.153 "data_size": 63488 00:25:36.153 } 00:25:36.153 ] 00:25:36.153 }' 00:25:36.153 13:09:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:36.153 13:09:40 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:36.154 13:09:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:36.154 13:09:40 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:36.154 13:09:40 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:36.413 [2024-04-17 13:09:40.366643] bdev_raid.c:3247:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:36.413 [2024-04-17 13:09:40.366705] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:36.413 13:09:40 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:25:36.413 [2024-04-17 13:09:40.422802] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:25:36.413 [2024-04-17 13:09:40.424927] bdev_raid.c:2751:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:36.413 [2024-04-17 13:09:40.546108] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:36.413 [2024-04-17 13:09:40.546757] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:36.672 [2024-04-17 13:09:40.676469] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 
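
The first rebuild is deliberately not allowed to finish: the trace above removes the rebuild target itself mid-copy, expects the process to be torn down cleanly (the "Finished rebuild ... No such device" warning), then re-attaches the spare and lets a second rebuild run. Condensed, with the same helper names as in the trace:

    $RPC bdev_raid_remove_base_bdev spare             # abort rebuild by removing its target
    verify_raid_bdev_process raid_bdev1 none none     # no process left on the array
    $RPC bdev_raid_add_base_bdev raid_bdev1 spare     # second rebuild starts from scratch
    verify_raid_bdev_process raid_bdev1 rebuild spare
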
00:25:36.672 [2024-04-17 13:09:40.676825] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:37.240 [2024-04-17 13:09:41.177681] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:25:37.517 13:09:41 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:37.517 13:09:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:37.517 13:09:41 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:37.517 13:09:41 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:37.517 13:09:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:37.517 13:09:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:37.517 13:09:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:37.517 [2024-04-17 13:09:41.437913] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:25:37.781 [2024-04-17 13:09:41.679340] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:25:37.781 [2024-04-17 13:09:41.679656] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:25:37.781 13:09:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:37.781 "name": "raid_bdev1", 00:25:37.781 "uuid": "64eccf8a-5dbf-471c-998b-1c0d719a00e4", 00:25:37.781 "strip_size_kb": 0, 00:25:37.781 "state": "online", 00:25:37.781 "raid_level": "raid1", 00:25:37.781 "superblock": true, 00:25:37.781 "num_base_bdevs": 4, 00:25:37.781 "num_base_bdevs_discovered": 4, 00:25:37.781 "num_base_bdevs_operational": 4, 00:25:37.781 "process": { 00:25:37.781 "type": "rebuild", 00:25:37.781 "target": "spare", 00:25:37.781 "progress": { 00:25:37.781 "blocks": 16384, 00:25:37.781 "percent": 25 00:25:37.781 } 00:25:37.781 }, 00:25:37.781 "base_bdevs_list": [ 00:25:37.781 { 00:25:37.781 "name": "spare", 00:25:37.781 "uuid": "64e3113d-b93f-566d-9509-264737e368fc", 00:25:37.781 "is_configured": true, 00:25:37.781 "data_offset": 2048, 00:25:37.781 "data_size": 63488 00:25:37.781 }, 00:25:37.781 { 00:25:37.781 "name": "BaseBdev2", 00:25:37.781 "uuid": "6a0d5f61-2321-52b5-bb73-2a7a816d2546", 00:25:37.781 "is_configured": true, 00:25:37.781 "data_offset": 2048, 00:25:37.781 "data_size": 63488 00:25:37.781 }, 00:25:37.781 { 00:25:37.781 "name": "BaseBdev3", 00:25:37.781 "uuid": "bbe427b8-ea4f-586f-922e-97d44c0197a0", 00:25:37.781 "is_configured": true, 00:25:37.781 "data_offset": 2048, 00:25:37.781 "data_size": 63488 00:25:37.781 }, 00:25:37.781 { 00:25:37.781 "name": "BaseBdev4", 00:25:37.781 "uuid": "29609673-d6a9-5bef-bd15-48d315dc39f9", 00:25:37.781 "is_configured": true, 00:25:37.781 "data_offset": 2048, 00:25:37.781 "data_size": 63488 00:25:37.781 } 00:25:37.781 ] 00:25:37.781 }' 00:25:37.781 13:09:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:37.781 13:09:41 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:37.781 13:09:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:37.781 13:09:41 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:37.781 13:09:41 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:25:37.781 13:09:41 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:25:37.781 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:25:37.781 13:09:41 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:25:37.781 13:09:41 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:25:37.781 13:09:41 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:25:37.781 13:09:41 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:25:38.039 [2024-04-17 13:09:41.943485] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:25:38.039 [2024-04-17 13:09:41.952583] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:25:38.039 [2024-04-17 13:09:42.104738] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:38.297 [2024-04-17 13:09:42.196728] bdev_raid.c:1969:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005e10 00:25:38.297 [2024-04-17 13:09:42.196800] bdev_raid.c:1969:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006080 00:25:38.297 [2024-04-17 13:09:42.196848] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:25:38.297 13:09:42 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:25:38.297 13:09:42 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:25:38.297 13:09:42 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:38.297 13:09:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:38.297 13:09:42 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:38.297 13:09:42 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:38.297 13:09:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:38.297 13:09:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:38.297 13:09:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:38.556 [2024-04-17 13:09:42.458436] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:25:38.556 [2024-04-17 13:09:42.560027] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:25:38.556 [2024-04-17 13:09:42.560386] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:25:38.556 13:09:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:38.556 "name": "raid_bdev1", 00:25:38.556 "uuid": "64eccf8a-5dbf-471c-998b-1c0d719a00e4", 00:25:38.556 "strip_size_kb": 0, 00:25:38.556 "state": "online", 00:25:38.556 "raid_level": "raid1", 00:25:38.556 "superblock": true, 00:25:38.556 "num_base_bdevs": 4, 00:25:38.556 "num_base_bdevs_discovered": 3, 00:25:38.556 "num_base_bdevs_operational": 3, 00:25:38.556 "process": { 00:25:38.556 "type": "rebuild", 00:25:38.556 "target": "spare", 00:25:38.556 "progress": { 00:25:38.556 "blocks": 28672, 00:25:38.556 "percent": 45 00:25:38.556 } 00:25:38.556 }, 00:25:38.556 "base_bdevs_list": [ 00:25:38.556 { 00:25:38.556 "name": "spare", 00:25:38.556 "uuid": "64e3113d-b93f-566d-9509-264737e368fc", 00:25:38.556 "is_configured": true, 00:25:38.556 "data_offset": 2048, 00:25:38.556 "data_size": 63488 00:25:38.556 }, 00:25:38.556 { 
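
One genuine shell bug is captured a few entries above: `bdev_raid.sh: line 617: [: =: unary operator expected`. The traced command `'[' = false ']'` shows why: line 617 expands an unset variable unquoted inside `[ ]`, so the test collapses to `[ = false ]` and `[` sees a lone `=` where an operand belongs. The test run survives only because the failed condition falls through to the next branch (the `'[' raid1 = raid1 ']'` / `'[' 4 -gt 2 ']'` checks that follow). A minimal reproduction — the variable name here is illustrative, not the one bdev_raid.sh uses:

    unset fast_rebuild
    [ $fast_rebuild = false ]        # expands to `[ = false ]` -> unary operator expected
    [ "${fast_rebuild:-}" = false ]  # quoted, defaulted form: evaluates to false, no error
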
00:25:38.556 "name": null, 00:25:38.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:38.556 "is_configured": false, 00:25:38.556 "data_offset": 2048, 00:25:38.556 "data_size": 63488 00:25:38.556 }, 00:25:38.556 { 00:25:38.556 "name": "BaseBdev3", 00:25:38.556 "uuid": "bbe427b8-ea4f-586f-922e-97d44c0197a0", 00:25:38.556 "is_configured": true, 00:25:38.556 "data_offset": 2048, 00:25:38.556 "data_size": 63488 00:25:38.556 }, 00:25:38.557 { 00:25:38.557 "name": "BaseBdev4", 00:25:38.557 "uuid": "29609673-d6a9-5bef-bd15-48d315dc39f9", 00:25:38.557 "is_configured": true, 00:25:38.557 "data_offset": 2048, 00:25:38.557 "data_size": 63488 00:25:38.557 } 00:25:38.557 ] 00:25:38.557 }' 00:25:38.557 13:09:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:38.557 13:09:42 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:38.557 13:09:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:38.816 13:09:42 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:38.816 13:09:42 -- bdev/bdev_raid.sh@657 -- # local timeout=603 00:25:38.816 13:09:42 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:38.816 13:09:42 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:38.816 13:09:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:38.816 13:09:42 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:38.816 13:09:42 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:38.816 13:09:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:38.816 13:09:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:38.816 13:09:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:38.816 [2024-04-17 13:09:42.793425] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:25:38.816 [2024-04-17 13:09:42.902687] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:25:38.816 [2024-04-17 13:09:42.903236] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:25:39.074 13:09:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:39.074 "name": "raid_bdev1", 00:25:39.074 "uuid": "64eccf8a-5dbf-471c-998b-1c0d719a00e4", 00:25:39.074 "strip_size_kb": 0, 00:25:39.074 "state": "online", 00:25:39.074 "raid_level": "raid1", 00:25:39.074 "superblock": true, 00:25:39.074 "num_base_bdevs": 4, 00:25:39.074 "num_base_bdevs_discovered": 3, 00:25:39.074 "num_base_bdevs_operational": 3, 00:25:39.074 "process": { 00:25:39.074 "type": "rebuild", 00:25:39.074 "target": "spare", 00:25:39.074 "progress": { 00:25:39.074 "blocks": 34816, 00:25:39.074 "percent": 54 00:25:39.074 } 00:25:39.074 }, 00:25:39.074 "base_bdevs_list": [ 00:25:39.074 { 00:25:39.074 "name": "spare", 00:25:39.074 "uuid": "64e3113d-b93f-566d-9509-264737e368fc", 00:25:39.074 "is_configured": true, 00:25:39.074 "data_offset": 2048, 00:25:39.074 "data_size": 63488 00:25:39.074 }, 00:25:39.074 { 00:25:39.074 "name": null, 00:25:39.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:39.074 "is_configured": false, 00:25:39.074 "data_offset": 2048, 00:25:39.074 "data_size": 63488 00:25:39.074 }, 00:25:39.074 { 00:25:39.074 "name": "BaseBdev3", 00:25:39.074 "uuid": "bbe427b8-ea4f-586f-922e-97d44c0197a0", 00:25:39.074 
"is_configured": true, 00:25:39.074 "data_offset": 2048, 00:25:39.074 "data_size": 63488 00:25:39.074 }, 00:25:39.074 { 00:25:39.074 "name": "BaseBdev4", 00:25:39.074 "uuid": "29609673-d6a9-5bef-bd15-48d315dc39f9", 00:25:39.074 "is_configured": true, 00:25:39.074 "data_offset": 2048, 00:25:39.074 "data_size": 63488 00:25:39.074 } 00:25:39.074 ] 00:25:39.074 }' 00:25:39.074 13:09:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:39.074 13:09:43 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:39.074 13:09:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:39.074 13:09:43 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:39.074 13:09:43 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:39.379 [2024-04-17 13:09:43.275522] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:25:39.945 [2024-04-17 13:09:44.085424] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:25:40.202 13:09:44 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:40.202 13:09:44 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:40.203 13:09:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:40.203 13:09:44 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:40.203 13:09:44 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:40.203 13:09:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:40.203 13:09:44 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:40.203 13:09:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:40.203 [2024-04-17 13:09:44.295374] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:25:40.461 13:09:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:40.461 "name": "raid_bdev1", 00:25:40.461 "uuid": "64eccf8a-5dbf-471c-998b-1c0d719a00e4", 00:25:40.461 "strip_size_kb": 0, 00:25:40.461 "state": "online", 00:25:40.461 "raid_level": "raid1", 00:25:40.461 "superblock": true, 00:25:40.461 "num_base_bdevs": 4, 00:25:40.461 "num_base_bdevs_discovered": 3, 00:25:40.461 "num_base_bdevs_operational": 3, 00:25:40.461 "process": { 00:25:40.461 "type": "rebuild", 00:25:40.461 "target": "spare", 00:25:40.461 "progress": { 00:25:40.461 "blocks": 53248, 00:25:40.461 "percent": 83 00:25:40.461 } 00:25:40.461 }, 00:25:40.461 "base_bdevs_list": [ 00:25:40.461 { 00:25:40.461 "name": "spare", 00:25:40.461 "uuid": "64e3113d-b93f-566d-9509-264737e368fc", 00:25:40.461 "is_configured": true, 00:25:40.461 "data_offset": 2048, 00:25:40.461 "data_size": 63488 00:25:40.461 }, 00:25:40.461 { 00:25:40.461 "name": null, 00:25:40.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:40.461 "is_configured": false, 00:25:40.461 "data_offset": 2048, 00:25:40.461 "data_size": 63488 00:25:40.461 }, 00:25:40.461 { 00:25:40.461 "name": "BaseBdev3", 00:25:40.461 "uuid": "bbe427b8-ea4f-586f-922e-97d44c0197a0", 00:25:40.461 "is_configured": true, 00:25:40.461 "data_offset": 2048, 00:25:40.461 "data_size": 63488 00:25:40.461 }, 00:25:40.461 { 00:25:40.461 "name": "BaseBdev4", 00:25:40.461 "uuid": "29609673-d6a9-5bef-bd15-48d315dc39f9", 00:25:40.461 "is_configured": true, 00:25:40.461 "data_offset": 2048, 00:25:40.461 "data_size": 63488 00:25:40.461 } 00:25:40.461 ] 
00:25:40.461 }' 00:25:40.461 13:09:44 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:40.461 13:09:44 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:40.461 13:09:44 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:40.461 13:09:44 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:40.461 13:09:44 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:40.719 [2024-04-17 13:09:44.635393] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:25:40.977 [2024-04-17 13:09:45.076701] bdev_raid.c:2716:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:41.235 [2024-04-17 13:09:45.176714] bdev_raid.c:2433:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:41.235 [2024-04-17 13:09:45.179141] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:41.493 13:09:45 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:41.493 13:09:45 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:41.493 13:09:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:41.493 13:09:45 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:41.493 13:09:45 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:41.493 13:09:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:41.493 13:09:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:41.493 13:09:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:41.752 13:09:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:41.752 "name": "raid_bdev1", 00:25:41.752 "uuid": "64eccf8a-5dbf-471c-998b-1c0d719a00e4", 00:25:41.752 "strip_size_kb": 0, 00:25:41.752 "state": "online", 00:25:41.752 "raid_level": "raid1", 00:25:41.752 "superblock": true, 00:25:41.752 "num_base_bdevs": 4, 00:25:41.752 "num_base_bdevs_discovered": 3, 00:25:41.752 "num_base_bdevs_operational": 3, 00:25:41.752 "base_bdevs_list": [ 00:25:41.752 { 00:25:41.752 "name": "spare", 00:25:41.752 "uuid": "64e3113d-b93f-566d-9509-264737e368fc", 00:25:41.752 "is_configured": true, 00:25:41.752 "data_offset": 2048, 00:25:41.752 "data_size": 63488 00:25:41.752 }, 00:25:41.752 { 00:25:41.752 "name": null, 00:25:41.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:41.752 "is_configured": false, 00:25:41.752 "data_offset": 2048, 00:25:41.752 "data_size": 63488 00:25:41.752 }, 00:25:41.752 { 00:25:41.752 "name": "BaseBdev3", 00:25:41.752 "uuid": "bbe427b8-ea4f-586f-922e-97d44c0197a0", 00:25:41.752 "is_configured": true, 00:25:41.752 "data_offset": 2048, 00:25:41.752 "data_size": 63488 00:25:41.752 }, 00:25:41.752 { 00:25:41.752 "name": "BaseBdev4", 00:25:41.752 "uuid": "29609673-d6a9-5bef-bd15-48d315dc39f9", 00:25:41.752 "is_configured": true, 00:25:41.752 "data_offset": 2048, 00:25:41.752 "data_size": 63488 00:25:41.752 } 00:25:41.752 ] 00:25:41.752 }' 00:25:41.752 13:09:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:41.752 13:09:45 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:41.752 13:09:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:42.010 13:09:45 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:25:42.010 13:09:45 -- bdev/bdev_raid.sh@660 -- # break 00:25:42.010 13:09:45 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none 
none 00:25:42.010 13:09:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:42.010 13:09:45 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:42.010 13:09:45 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:42.010 13:09:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:42.010 13:09:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:42.010 13:09:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:42.268 13:09:46 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:42.268 "name": "raid_bdev1", 00:25:42.268 "uuid": "64eccf8a-5dbf-471c-998b-1c0d719a00e4", 00:25:42.268 "strip_size_kb": 0, 00:25:42.268 "state": "online", 00:25:42.268 "raid_level": "raid1", 00:25:42.268 "superblock": true, 00:25:42.268 "num_base_bdevs": 4, 00:25:42.268 "num_base_bdevs_discovered": 3, 00:25:42.268 "num_base_bdevs_operational": 3, 00:25:42.268 "base_bdevs_list": [ 00:25:42.268 { 00:25:42.268 "name": "spare", 00:25:42.268 "uuid": "64e3113d-b93f-566d-9509-264737e368fc", 00:25:42.268 "is_configured": true, 00:25:42.268 "data_offset": 2048, 00:25:42.268 "data_size": 63488 00:25:42.268 }, 00:25:42.268 { 00:25:42.268 "name": null, 00:25:42.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:42.268 "is_configured": false, 00:25:42.268 "data_offset": 2048, 00:25:42.268 "data_size": 63488 00:25:42.268 }, 00:25:42.268 { 00:25:42.268 "name": "BaseBdev3", 00:25:42.268 "uuid": "bbe427b8-ea4f-586f-922e-97d44c0197a0", 00:25:42.268 "is_configured": true, 00:25:42.268 "data_offset": 2048, 00:25:42.268 "data_size": 63488 00:25:42.268 }, 00:25:42.268 { 00:25:42.268 "name": "BaseBdev4", 00:25:42.268 "uuid": "29609673-d6a9-5bef-bd15-48d315dc39f9", 00:25:42.268 "is_configured": true, 00:25:42.268 "data_offset": 2048, 00:25:42.268 "data_size": 63488 00:25:42.268 } 00:25:42.268 ] 00:25:42.268 }' 00:25:42.268 13:09:46 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:42.268 13:09:46 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:42.268 13:09:46 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:42.268 13:09:46 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:42.268 13:09:46 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:42.268 13:09:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:42.268 13:09:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:42.268 13:09:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:42.268 13:09:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:42.268 13:09:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:42.268 13:09:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:42.268 13:09:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:42.268 13:09:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:42.268 13:09:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:42.268 13:09:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:42.268 13:09:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:42.835 13:09:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:42.835 "name": "raid_bdev1", 00:25:42.835 "uuid": "64eccf8a-5dbf-471c-998b-1c0d719a00e4", 00:25:42.835 "strip_size_kb": 0, 00:25:42.835 "state": "online", 00:25:42.835 "raid_level": 
"raid1", 00:25:42.835 "superblock": true, 00:25:42.835 "num_base_bdevs": 4, 00:25:42.835 "num_base_bdevs_discovered": 3, 00:25:42.835 "num_base_bdevs_operational": 3, 00:25:42.836 "base_bdevs_list": [ 00:25:42.836 { 00:25:42.836 "name": "spare", 00:25:42.836 "uuid": "64e3113d-b93f-566d-9509-264737e368fc", 00:25:42.836 "is_configured": true, 00:25:42.836 "data_offset": 2048, 00:25:42.836 "data_size": 63488 00:25:42.836 }, 00:25:42.836 { 00:25:42.836 "name": null, 00:25:42.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:42.836 "is_configured": false, 00:25:42.836 "data_offset": 2048, 00:25:42.836 "data_size": 63488 00:25:42.836 }, 00:25:42.836 { 00:25:42.836 "name": "BaseBdev3", 00:25:42.836 "uuid": "bbe427b8-ea4f-586f-922e-97d44c0197a0", 00:25:42.836 "is_configured": true, 00:25:42.836 "data_offset": 2048, 00:25:42.836 "data_size": 63488 00:25:42.836 }, 00:25:42.836 { 00:25:42.836 "name": "BaseBdev4", 00:25:42.836 "uuid": "29609673-d6a9-5bef-bd15-48d315dc39f9", 00:25:42.836 "is_configured": true, 00:25:42.836 "data_offset": 2048, 00:25:42.836 "data_size": 63488 00:25:42.836 } 00:25:42.836 ] 00:25:42.836 }' 00:25:42.836 13:09:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:42.836 13:09:46 -- common/autotest_common.sh@10 -- # set +x 00:25:43.403 13:09:47 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:43.662 [2024-04-17 13:09:47.692906] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:43.662 [2024-04-17 13:09:47.692957] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:43.662 00:25:43.662 Latency(us) 00:25:43.662 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:43.662 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:25:43.662 raid_bdev1 : 12.63 92.47 277.41 0.00 0.00 15299.83 396.57 124875.87 00:25:43.662 =================================================================================================================== 00:25:43.662 Total : 92.47 277.41 0.00 0.00 15299.83 396.57 124875.87 00:25:43.662 [2024-04-17 13:09:47.780212] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:43.662 [2024-04-17 13:09:47.780274] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:43.662 [2024-04-17 13:09:47.780389] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:43.662 [2024-04-17 13:09:47.780402] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:25:43.662 0 00:25:43.662 13:09:47 -- bdev/bdev_raid.sh@671 -- # jq length 00:25:43.662 13:09:47 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:43.921 13:09:48 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:25:43.921 13:09:48 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:25:43.921 13:09:48 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:25:43.921 13:09:48 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:43.921 13:09:48 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:25:43.921 13:09:48 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:43.921 13:09:48 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:25:43.921 13:09:48 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:43.921 
13:09:48 -- bdev/nbd_common.sh@12 -- # local i 00:25:43.921 13:09:48 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:43.921 13:09:48 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:43.921 13:09:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:25:44.179 /dev/nbd0 00:25:44.462 13:09:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:44.462 13:09:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:44.462 13:09:48 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:25:44.462 13:09:48 -- common/autotest_common.sh@855 -- # local i 00:25:44.462 13:09:48 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:25:44.462 13:09:48 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:25:44.462 13:09:48 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:25:44.462 13:09:48 -- common/autotest_common.sh@859 -- # break 00:25:44.462 13:09:48 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:25:44.462 13:09:48 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:25:44.462 13:09:48 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:44.462 1+0 records in 00:25:44.462 1+0 records out 00:25:44.462 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374035 s, 11.0 MB/s 00:25:44.462 13:09:48 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:44.462 13:09:48 -- common/autotest_common.sh@872 -- # size=4096 00:25:44.463 13:09:48 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:44.463 13:09:48 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:25:44.463 13:09:48 -- common/autotest_common.sh@875 -- # return 0 00:25:44.463 13:09:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:44.463 13:09:48 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:44.463 13:09:48 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:25:44.463 13:09:48 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:25:44.463 13:09:48 -- bdev/bdev_raid.sh@678 -- # continue 00:25:44.463 13:09:48 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:25:44.463 13:09:48 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:25:44.463 13:09:48 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:25:44.463 13:09:48 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:44.463 13:09:48 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:25:44.463 13:09:48 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:44.463 13:09:48 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:25:44.463 13:09:48 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:44.463 13:09:48 -- bdev/nbd_common.sh@12 -- # local i 00:25:44.463 13:09:48 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:44.463 13:09:48 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:44.463 13:09:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:25:44.733 /dev/nbd1 00:25:44.733 13:09:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:44.733 13:09:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:44.733 13:09:48 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:25:44.733 13:09:48 -- common/autotest_common.sh@855 -- # local i 00:25:44.734 13:09:48 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:25:44.734 13:09:48 -- 
common/autotest_common.sh@857 -- # (( i <= 20 )) 00:25:44.734 13:09:48 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:25:44.734 13:09:48 -- common/autotest_common.sh@859 -- # break 00:25:44.734 13:09:48 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:25:44.734 13:09:48 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:25:44.734 13:09:48 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:44.734 1+0 records in 00:25:44.734 1+0 records out 00:25:44.734 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367809 s, 11.1 MB/s 00:25:44.734 13:09:48 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:44.734 13:09:48 -- common/autotest_common.sh@872 -- # size=4096 00:25:44.734 13:09:48 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:44.734 13:09:48 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:25:44.734 13:09:48 -- common/autotest_common.sh@875 -- # return 0 00:25:44.734 13:09:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:44.734 13:09:48 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:44.734 13:09:48 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:25:44.734 13:09:48 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:25:44.734 13:09:48 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:44.734 13:09:48 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:25:44.734 13:09:48 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:44.734 13:09:48 -- bdev/nbd_common.sh@51 -- # local i 00:25:44.734 13:09:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:44.734 13:09:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:45.004 13:09:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:45.004 13:09:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:45.004 13:09:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:45.004 13:09:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:45.004 13:09:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:45.004 13:09:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:45.004 13:09:49 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:25:45.263 13:09:49 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:25:45.263 13:09:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:45.263 13:09:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:45.263 13:09:49 -- bdev/nbd_common.sh@41 -- # break 00:25:45.263 13:09:49 -- bdev/nbd_common.sh@45 -- # return 0 00:25:45.263 13:09:49 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:25:45.263 13:09:49 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:25:45.263 13:09:49 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:25:45.263 13:09:49 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:45.263 13:09:49 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:25:45.263 13:09:49 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:45.263 13:09:49 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:25:45.263 13:09:49 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:45.263 13:09:49 -- bdev/nbd_common.sh@12 -- # local i 00:25:45.263 13:09:49 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:45.263 13:09:49 -- bdev/nbd_common.sh@14 -- # (( i < 
1 )) 00:25:45.263 13:09:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:25:45.522 /dev/nbd1 00:25:45.522 13:09:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:45.522 13:09:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:45.522 13:09:49 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:25:45.522 13:09:49 -- common/autotest_common.sh@855 -- # local i 00:25:45.522 13:09:49 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:25:45.522 13:09:49 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:25:45.522 13:09:49 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:25:45.522 13:09:49 -- common/autotest_common.sh@859 -- # break 00:25:45.522 13:09:49 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:25:45.522 13:09:49 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:25:45.522 13:09:49 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:45.522 1+0 records in 00:25:45.522 1+0 records out 00:25:45.522 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333691 s, 12.3 MB/s 00:25:45.522 13:09:49 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:45.522 13:09:49 -- common/autotest_common.sh@872 -- # size=4096 00:25:45.522 13:09:49 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:45.522 13:09:49 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:25:45.522 13:09:49 -- common/autotest_common.sh@875 -- # return 0 00:25:45.522 13:09:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:45.522 13:09:49 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:45.522 13:09:49 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:25:45.522 13:09:49 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:25:45.522 13:09:49 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:45.522 13:09:49 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:25:45.522 13:09:49 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:45.522 13:09:49 -- bdev/nbd_common.sh@51 -- # local i 00:25:45.522 13:09:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:45.522 13:09:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:45.780 13:09:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:45.780 13:09:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:45.780 13:09:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:45.780 13:09:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:45.780 13:09:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:45.780 13:09:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:45.780 13:09:49 -- bdev/nbd_common.sh@41 -- # break 00:25:45.780 13:09:49 -- bdev/nbd_common.sh@45 -- # return 0 00:25:45.780 13:09:49 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:25:45.780 13:09:49 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:45.780 13:09:49 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:25:45.780 13:09:49 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:45.780 13:09:49 -- bdev/nbd_common.sh@51 -- # local i 00:25:45.780 13:09:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:45.780 13:09:49 -- bdev/nbd_common.sh@54 -- # 
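The two traces above repeat the same wait-then-verify pattern for every NBD export: poll /proc/partitions until the kernel registers the device, prove it services I/O with a single 4 KiB O_DIRECT read, then byte-compare the two exports while skipping the raid superblock region (the data_offset of 2048 blocks at 512 B blocklen is 1 MiB, hence cmp -i 1048576). A minimal standalone sketch of that pattern, with illustrative function and scratch-file names rather than the exact autotest_common.sh helpers:

    wait_for_nbd() {
        # Poll up to 20 times (roughly 2 s) for the named device to appear.
        local name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions && return 0
            sleep 0.1
        done
        return 1
    }

    wait_for_nbd nbd1 || exit 1
    # A single direct read proves the device actually answers requests.
    dd if=/dev/nbd1 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    [ "$(stat -c %s /tmp/nbdtest)" -eq 4096 ] || exit 1
    # Compare payloads while skipping the first 1 MiB (superblock) of both devices.
    cmp -i 1048576 /dev/nbd0 /dev/nbd1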
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:46.347 13:09:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:46.347 13:09:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:46.347 13:09:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:46.347 13:09:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:46.347 13:09:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:46.347 13:09:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:46.347 13:09:50 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:25:46.347 13:09:50 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:25:46.347 13:09:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:46.347 13:09:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:46.347 13:09:50 -- bdev/nbd_common.sh@41 -- # break 00:25:46.347 13:09:50 -- bdev/nbd_common.sh@45 -- # return 0 00:25:46.347 13:09:50 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:25:46.347 13:09:50 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:25:46.347 13:09:50 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:25:46.347 13:09:50 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:25:46.605 13:09:50 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:46.863 [2024-04-17 13:09:50.759151] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:46.863 [2024-04-17 13:09:50.759271] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:46.863 [2024-04-17 13:09:50.759317] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:25:46.863 [2024-04-17 13:09:50.759348] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:46.863 [2024-04-17 13:09:50.761936] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:46.863 [2024-04-17 13:09:50.762015] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:46.863 [2024-04-17 13:09:50.762146] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:25:46.863 [2024-04-17 13:09:50.762212] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:46.863 BaseBdev1 00:25:46.863 13:09:50 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:25:46.863 13:09:50 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:25:46.863 13:09:50 -- bdev/bdev_raid.sh@696 -- # continue 00:25:46.863 13:09:50 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:25:46.863 13:09:50 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:25:46.863 13:09:50 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:25:47.121 13:09:51 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:25:47.380 [2024-04-17 13:09:51.267314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:25:47.380 [2024-04-17 13:09:51.267410] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:47.380 [2024-04-17 13:09:51.267467] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x61600000c380 00:25:47.380 [2024-04-17 13:09:51.267495] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:47.380 [2024-04-17 13:09:51.268053] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:47.380 [2024-04-17 13:09:51.268120] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:47.380 [2024-04-17 13:09:51.268233] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:25:47.380 [2024-04-17 13:09:51.268249] bdev_raid.c:3395:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:25:47.380 [2024-04-17 13:09:51.268257] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:47.380 [2024-04-17 13:09:51.268282] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state configuring 00:25:47.380 [2024-04-17 13:09:51.268368] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:47.380 BaseBdev3 00:25:47.380 13:09:51 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:25:47.380 13:09:51 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:25:47.380 13:09:51 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:25:47.639 13:09:51 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:25:47.639 [2024-04-17 13:09:51.763462] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:25:47.639 [2024-04-17 13:09:51.763568] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:47.639 [2024-04-17 13:09:51.763608] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:25:47.639 [2024-04-17 13:09:51.763640] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:47.639 [2024-04-17 13:09:51.764181] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:47.639 [2024-04-17 13:09:51.764246] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:47.639 [2024-04-17 13:09:51.764358] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:25:47.639 [2024-04-17 13:09:51.764388] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:47.639 BaseBdev4 00:25:47.639 13:09:51 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:47.898 13:09:51 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:48.157 [2024-04-17 13:09:52.251682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:48.157 [2024-04-17 13:09:52.251784] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:48.157 [2024-04-17 13:09:52.251838] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:25:48.157 [2024-04-17 13:09:52.251869] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:48.157 [2024-04-17 13:09:52.252421] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:25:48.157 [2024-04-17 13:09:52.252500] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:48.157 [2024-04-17 13:09:52.252629] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:25:48.157 [2024-04-17 13:09:52.252666] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:48.157 spare 00:25:48.157 13:09:52 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:48.157 13:09:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:48.157 13:09:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:48.157 13:09:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:48.157 13:09:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:48.157 13:09:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:48.157 13:09:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:48.157 13:09:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:48.157 13:09:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:48.157 13:09:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:48.157 13:09:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:48.157 13:09:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:48.416 [2024-04-17 13:09:52.352794] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c680 00:25:48.417 [2024-04-17 13:09:52.352834] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:48.417 [2024-04-17 13:09:52.353013] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00003a3c0 00:25:48.417 [2024-04-17 13:09:52.353488] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c680 00:25:48.417 [2024-04-17 13:09:52.353514] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c680 00:25:48.417 [2024-04-17 13:09:52.353698] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:48.417 13:09:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:48.417 "name": "raid_bdev1", 00:25:48.417 "uuid": "64eccf8a-5dbf-471c-998b-1c0d719a00e4", 00:25:48.417 "strip_size_kb": 0, 00:25:48.417 "state": "online", 00:25:48.417 "raid_level": "raid1", 00:25:48.417 "superblock": true, 00:25:48.417 "num_base_bdevs": 4, 00:25:48.417 "num_base_bdevs_discovered": 3, 00:25:48.417 "num_base_bdevs_operational": 3, 00:25:48.417 "base_bdevs_list": [ 00:25:48.417 { 00:25:48.417 "name": "spare", 00:25:48.417 "uuid": "64e3113d-b93f-566d-9509-264737e368fc", 00:25:48.417 "is_configured": true, 00:25:48.417 "data_offset": 2048, 00:25:48.417 "data_size": 63488 00:25:48.417 }, 00:25:48.417 { 00:25:48.417 "name": null, 00:25:48.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:48.417 "is_configured": false, 00:25:48.417 "data_offset": 2048, 00:25:48.417 "data_size": 63488 00:25:48.417 }, 00:25:48.417 { 00:25:48.417 "name": "BaseBdev3", 00:25:48.417 "uuid": "bbe427b8-ea4f-586f-922e-97d44c0197a0", 00:25:48.417 "is_configured": true, 00:25:48.417 "data_offset": 2048, 00:25:48.417 "data_size": 63488 00:25:48.417 }, 00:25:48.417 { 00:25:48.417 "name": "BaseBdev4", 00:25:48.417 "uuid": "29609673-d6a9-5bef-bd15-48d315dc39f9", 00:25:48.417 "is_configured": true, 00:25:48.417 "data_offset": 2048, 00:25:48.417 
"data_size": 63488 00:25:48.417 } 00:25:48.417 ] 00:25:48.417 }' 00:25:48.417 13:09:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:48.417 13:09:52 -- common/autotest_common.sh@10 -- # set +x 00:25:49.352 13:09:53 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:49.352 13:09:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:49.352 13:09:53 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:49.352 13:09:53 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:49.352 13:09:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:49.352 13:09:53 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:49.352 13:09:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:49.352 13:09:53 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:49.352 "name": "raid_bdev1", 00:25:49.352 "uuid": "64eccf8a-5dbf-471c-998b-1c0d719a00e4", 00:25:49.352 "strip_size_kb": 0, 00:25:49.352 "state": "online", 00:25:49.352 "raid_level": "raid1", 00:25:49.352 "superblock": true, 00:25:49.352 "num_base_bdevs": 4, 00:25:49.352 "num_base_bdevs_discovered": 3, 00:25:49.352 "num_base_bdevs_operational": 3, 00:25:49.352 "base_bdevs_list": [ 00:25:49.352 { 00:25:49.352 "name": "spare", 00:25:49.352 "uuid": "64e3113d-b93f-566d-9509-264737e368fc", 00:25:49.352 "is_configured": true, 00:25:49.352 "data_offset": 2048, 00:25:49.352 "data_size": 63488 00:25:49.352 }, 00:25:49.352 { 00:25:49.352 "name": null, 00:25:49.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:49.352 "is_configured": false, 00:25:49.352 "data_offset": 2048, 00:25:49.352 "data_size": 63488 00:25:49.352 }, 00:25:49.352 { 00:25:49.352 "name": "BaseBdev3", 00:25:49.352 "uuid": "bbe427b8-ea4f-586f-922e-97d44c0197a0", 00:25:49.352 "is_configured": true, 00:25:49.352 "data_offset": 2048, 00:25:49.352 "data_size": 63488 00:25:49.352 }, 00:25:49.352 { 00:25:49.352 "name": "BaseBdev4", 00:25:49.352 "uuid": "29609673-d6a9-5bef-bd15-48d315dc39f9", 00:25:49.352 "is_configured": true, 00:25:49.352 "data_offset": 2048, 00:25:49.352 "data_size": 63488 00:25:49.352 } 00:25:49.352 ] 00:25:49.352 }' 00:25:49.352 13:09:53 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:49.610 13:09:53 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:49.610 13:09:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:49.610 13:09:53 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:49.610 13:09:53 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:49.610 13:09:53 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:25:49.868 13:09:53 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:25:49.868 13:09:53 -- bdev/bdev_raid.sh@709 -- # killprocess 134951 00:25:49.868 13:09:53 -- common/autotest_common.sh@924 -- # '[' -z 134951 ']' 00:25:49.868 13:09:53 -- common/autotest_common.sh@928 -- # kill -0 134951 00:25:49.868 13:09:53 -- common/autotest_common.sh@929 -- # uname 00:25:49.868 13:09:53 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:25:49.868 13:09:53 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 134951 00:25:49.868 killing process with pid 134951 00:25:49.868 Received shutdown signal, test time was about 18.756977 seconds 00:25:49.868 00:25:49.868 Latency(us) 00:25:49.868 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:25:49.868 =================================================================================================================== 00:25:49.868 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:49.868 13:09:53 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:25:49.868 13:09:53 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:25:49.868 13:09:53 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 134951' 00:25:49.868 13:09:53 -- common/autotest_common.sh@943 -- # kill 134951 00:25:49.868 13:09:53 -- common/autotest_common.sh@948 -- # wait 134951 00:25:49.868 [2024-04-17 13:09:53.886707] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:49.868 [2024-04-17 13:09:53.886810] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:49.868 [2024-04-17 13:09:53.886912] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:49.868 [2024-04-17 13:09:53.886937] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c680 name raid_bdev1, state offline 00:25:50.127 [2024-04-17 13:09:54.247355] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:51.503 ************************************ 00:25:51.503 END TEST raid_rebuild_test_sb_io 00:25:51.503 ************************************ 00:25:51.503 13:09:55 -- bdev/bdev_raid.sh@711 -- # return 0 00:25:51.503 00:25:51.503 real 0m26.376s 00:25:51.503 user 0m43.129s 00:25:51.503 sys 0m3.141s 00:25:51.503 13:09:55 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:25:51.503 13:09:55 -- common/autotest_common.sh@10 -- # set +x 00:25:51.503 13:09:55 -- bdev/bdev_raid.sh@742 -- # '[' y == y ']' 00:25:51.503 13:09:55 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:25:51.503 13:09:55 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:25:51.503 13:09:55 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:25:51.503 13:09:55 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:25:51.503 13:09:55 -- common/autotest_common.sh@10 -- # set +x 00:25:51.503 ************************************ 00:25:51.503 START TEST raid5f_state_function_test 00:25:51.503 ************************************ 00:25:51.504 13:09:55 -- common/autotest_common.sh@1099 -- # raid_state_function_test raid5f 3 false 00:25:51.504 13:09:55 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:25:51.504 13:09:55 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:25:51.504 13:09:55 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:25:51.504 13:09:55 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:25:51.504 13:09:55 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:25:51.504 13:09:55 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:25:51.504 13:09:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:51.504 13:09:55 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:25:51.504 13:09:55 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:25:51.504 13:09:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:51.504 13:09:55 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:25:51.504 13:09:55 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:25:51.504 13:09:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:51.504 13:09:55 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:25:51.504 13:09:55 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:25:51.504 
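The teardown traced above (killprocess 134951) is the suite's standard shutdown: confirm the pid is still alive with kill -0, read the command name back through ps so a recycled pid is never killed blindly, then signal and reap it. A condensed sketch of that flow; the real autotest_common.sh helper also special-cases sudo-wrapped processes and non-Linux hosts:

    killprocess() {
        local pid=$1 name
        kill -0 "$pid" 2>/dev/null || return 0        # already gone
        name=$(ps --no-headers -o comm= "$pid")       # reactor_0 for SPDK apps
        [ "$name" = sudo ] && return 1                # never blind-kill a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true               # reap it if it is our child
    }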
13:09:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:51.504 13:09:55 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:25:51.504 13:09:55 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:25:51.504 13:09:55 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:25:51.504 13:09:55 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:25:51.504 13:09:55 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:25:51.504 13:09:55 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:25:51.504 13:09:55 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:25:51.504 13:09:55 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:25:51.504 13:09:55 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:25:51.504 13:09:55 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:25:51.504 13:09:55 -- bdev/bdev_raid.sh@226 -- # raid_pid=135629 00:25:51.504 Process raid pid: 135629 00:25:51.504 13:09:55 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 135629' 00:25:51.504 13:09:55 -- bdev/bdev_raid.sh@228 -- # waitforlisten 135629 /var/tmp/spdk-raid.sock 00:25:51.504 13:09:55 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:51.504 13:09:55 -- common/autotest_common.sh@817 -- # '[' -z 135629 ']' 00:25:51.504 13:09:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:51.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:51.504 13:09:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:51.504 13:09:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:51.504 13:09:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:51.504 13:09:55 -- common/autotest_common.sh@10 -- # set +x 00:25:51.504 [2024-04-17 13:09:55.573334] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
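As the trace above spells out step by step, raid_state_function_test derives its create-time arguments from the (raid_level, num_base_bdevs, superblock) triple before launching bdev_svc: every level except raid1 gets a 64 KiB strip size, and the superblock flag simply toggles -s on bdev_raid_create. The same derivation as a compact sketch, where rpc.py abbreviates the full scripts/rpc.py path used throughout this log:

    raid_level=raid5f num_base_bdevs=3 superblock=false
    # Expands to: BaseBdev1 BaseBdev2 BaseBdev3
    base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done))
    strip_size_create_arg=''
    [ "$raid_level" != raid1 ] && strip_size_create_arg='-z 64'  # raid1 takes no strip size
    superblock_create_arg=''
    [ "$superblock" = true ] && superblock_create_arg='-s'
    # Left unquoted on purpose: empty args must disappear, not become "".
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create $strip_size_create_arg \
        $superblock_create_arg -r "$raid_level" -b "${base_bdevs[*]}" -n Existed_Raid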
00:25:51.504 [2024-04-17 13:09:55.573753] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:51.763 [2024-04-17 13:09:55.741624] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.022 [2024-04-17 13:09:55.969049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:52.279 [2024-04-17 13:09:56.175042] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:52.541 13:09:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:52.541 13:09:56 -- common/autotest_common.sh@850 -- # return 0 00:25:52.541 13:09:56 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:25:52.801 [2024-04-17 13:09:56.848556] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:52.801 [2024-04-17 13:09:56.848660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:52.801 [2024-04-17 13:09:56.848675] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:52.801 [2024-04-17 13:09:56.848695] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:52.801 [2024-04-17 13:09:56.848703] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:52.802 [2024-04-17 13:09:56.848745] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:52.802 13:09:56 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:52.802 13:09:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:52.802 13:09:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:52.802 13:09:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:52.802 13:09:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:52.802 13:09:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:52.802 13:09:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:52.802 13:09:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:52.802 13:09:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:52.802 13:09:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:52.802 13:09:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:52.802 13:09:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:53.060 13:09:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:53.060 "name": "Existed_Raid", 00:25:53.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:53.060 "strip_size_kb": 64, 00:25:53.060 "state": "configuring", 00:25:53.060 "raid_level": "raid5f", 00:25:53.060 "superblock": false, 00:25:53.060 "num_base_bdevs": 3, 00:25:53.060 "num_base_bdevs_discovered": 0, 00:25:53.060 "num_base_bdevs_operational": 3, 00:25:53.060 "base_bdevs_list": [ 00:25:53.060 { 00:25:53.060 "name": "BaseBdev1", 00:25:53.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:53.060 "is_configured": false, 00:25:53.060 "data_offset": 0, 00:25:53.060 "data_size": 0 00:25:53.060 }, 00:25:53.060 { 00:25:53.060 "name": "BaseBdev2", 00:25:53.060 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:25:53.061 "is_configured": false, 00:25:53.061 "data_offset": 0, 00:25:53.061 "data_size": 0 00:25:53.061 }, 00:25:53.061 { 00:25:53.061 "name": "BaseBdev3", 00:25:53.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:53.061 "is_configured": false, 00:25:53.061 "data_offset": 0, 00:25:53.061 "data_size": 0 00:25:53.061 } 00:25:53.061 ] 00:25:53.061 }' 00:25:53.061 13:09:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:53.061 13:09:57 -- common/autotest_common.sh@10 -- # set +x 00:25:53.664 13:09:57 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:53.923 [2024-04-17 13:09:58.020683] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:53.923 [2024-04-17 13:09:58.020766] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:25:53.923 13:09:58 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:25:54.181 [2024-04-17 13:09:58.292786] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:54.181 [2024-04-17 13:09:58.292881] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:54.181 [2024-04-17 13:09:58.292895] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:54.181 [2024-04-17 13:09:58.292923] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:54.181 [2024-04-17 13:09:58.292932] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:54.181 [2024-04-17 13:09:58.292959] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:54.181 13:09:58 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:54.749 [2024-04-17 13:09:58.604217] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:54.749 BaseBdev1 00:25:54.749 13:09:58 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:25:54.749 13:09:58 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:25:54.749 13:09:58 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:25:54.749 13:09:58 -- common/autotest_common.sh@887 -- # local i 00:25:54.749 13:09:58 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:25:54.749 13:09:58 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:25:54.749 13:09:58 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:54.749 13:09:58 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:55.007 [ 00:25:55.007 { 00:25:55.007 "name": "BaseBdev1", 00:25:55.007 "aliases": [ 00:25:55.007 "d90f8086-6c6c-4114-899d-54008b7e9378" 00:25:55.007 ], 00:25:55.007 "product_name": "Malloc disk", 00:25:55.007 "block_size": 512, 00:25:55.007 "num_blocks": 65536, 00:25:55.007 "uuid": "d90f8086-6c6c-4114-899d-54008b7e9378", 00:25:55.007 "assigned_rate_limits": { 00:25:55.007 "rw_ios_per_sec": 0, 00:25:55.007 "rw_mbytes_per_sec": 0, 00:25:55.007 "r_mbytes_per_sec": 0, 00:25:55.007 "w_mbytes_per_sec": 
0 00:25:55.007 }, 00:25:55.007 "claimed": true, 00:25:55.007 "claim_type": "exclusive_write", 00:25:55.007 "zoned": false, 00:25:55.007 "supported_io_types": { 00:25:55.007 "read": true, 00:25:55.007 "write": true, 00:25:55.007 "unmap": true, 00:25:55.007 "write_zeroes": true, 00:25:55.007 "flush": true, 00:25:55.007 "reset": true, 00:25:55.007 "compare": false, 00:25:55.007 "compare_and_write": false, 00:25:55.007 "abort": true, 00:25:55.007 "nvme_admin": false, 00:25:55.007 "nvme_io": false 00:25:55.007 }, 00:25:55.007 "memory_domains": [ 00:25:55.007 { 00:25:55.007 "dma_device_id": "system", 00:25:55.007 "dma_device_type": 1 00:25:55.007 }, 00:25:55.007 { 00:25:55.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:55.007 "dma_device_type": 2 00:25:55.007 } 00:25:55.007 ], 00:25:55.007 "driver_specific": {} 00:25:55.007 } 00:25:55.007 ] 00:25:55.007 13:09:59 -- common/autotest_common.sh@893 -- # return 0 00:25:55.007 13:09:59 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:55.007 13:09:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:55.007 13:09:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:55.007 13:09:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:55.007 13:09:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:55.007 13:09:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:55.007 13:09:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:55.007 13:09:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:55.007 13:09:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:55.007 13:09:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:55.007 13:09:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:55.007 13:09:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:55.265 13:09:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:55.265 "name": "Existed_Raid", 00:25:55.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.265 "strip_size_kb": 64, 00:25:55.265 "state": "configuring", 00:25:55.265 "raid_level": "raid5f", 00:25:55.265 "superblock": false, 00:25:55.265 "num_base_bdevs": 3, 00:25:55.265 "num_base_bdevs_discovered": 1, 00:25:55.265 "num_base_bdevs_operational": 3, 00:25:55.265 "base_bdevs_list": [ 00:25:55.265 { 00:25:55.265 "name": "BaseBdev1", 00:25:55.265 "uuid": "d90f8086-6c6c-4114-899d-54008b7e9378", 00:25:55.265 "is_configured": true, 00:25:55.265 "data_offset": 0, 00:25:55.265 "data_size": 65536 00:25:55.265 }, 00:25:55.265 { 00:25:55.265 "name": "BaseBdev2", 00:25:55.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.265 "is_configured": false, 00:25:55.265 "data_offset": 0, 00:25:55.265 "data_size": 0 00:25:55.265 }, 00:25:55.265 { 00:25:55.265 "name": "BaseBdev3", 00:25:55.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.265 "is_configured": false, 00:25:55.265 "data_offset": 0, 00:25:55.265 "data_size": 0 00:25:55.265 } 00:25:55.265 ] 00:25:55.265 }' 00:25:55.265 13:09:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:55.265 13:09:59 -- common/autotest_common.sh@10 -- # set +x 00:25:56.201 13:10:00 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:56.201 [2024-04-17 13:10:00.300682] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
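The waitforbdev call traced above is a two-step gate: bdev_wait_for_examine flushes pending examine callbacks, then bdev_get_bdevs with a server-side -t timeout in milliseconds blocks until the named bdev exists or the timeout expires. A minimal sketch along those lines:

    waitforbdev() {
        local bdev_name=$1 bdev_timeout=${2:-2000}   # ms, matching the -t 2000 above
        rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
        rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"
    }

    waitforbdev BaseBdev1 2000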
00:25:56.201 [2024-04-17 13:10:00.300765] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:25:56.201 13:10:00 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:25:56.201 13:10:00 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:25:56.459 [2024-04-17 13:10:00.572809] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:56.459 [2024-04-17 13:10:00.574957] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:56.459 [2024-04-17 13:10:00.575031] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:56.459 [2024-04-17 13:10:00.575044] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:56.459 [2024-04-17 13:10:00.575072] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:56.459 13:10:00 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:25:56.459 13:10:00 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:56.460 13:10:00 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:56.460 13:10:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:56.460 13:10:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:56.460 13:10:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:56.460 13:10:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:56.460 13:10:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:56.460 13:10:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:56.460 13:10:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:56.460 13:10:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:56.460 13:10:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:56.460 13:10:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:56.460 13:10:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:57.058 13:10:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:57.058 "name": "Existed_Raid", 00:25:57.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.058 "strip_size_kb": 64, 00:25:57.058 "state": "configuring", 00:25:57.058 "raid_level": "raid5f", 00:25:57.058 "superblock": false, 00:25:57.058 "num_base_bdevs": 3, 00:25:57.058 "num_base_bdevs_discovered": 1, 00:25:57.058 "num_base_bdevs_operational": 3, 00:25:57.058 "base_bdevs_list": [ 00:25:57.058 { 00:25:57.058 "name": "BaseBdev1", 00:25:57.058 "uuid": "d90f8086-6c6c-4114-899d-54008b7e9378", 00:25:57.058 "is_configured": true, 00:25:57.058 "data_offset": 0, 00:25:57.058 "data_size": 65536 00:25:57.058 }, 00:25:57.058 { 00:25:57.058 "name": "BaseBdev2", 00:25:57.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.058 "is_configured": false, 00:25:57.058 "data_offset": 0, 00:25:57.058 "data_size": 0 00:25:57.058 }, 00:25:57.058 { 00:25:57.058 "name": "BaseBdev3", 00:25:57.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.058 "is_configured": false, 00:25:57.058 "data_offset": 0, 00:25:57.058 "data_size": 0 00:25:57.058 } 00:25:57.058 ] 00:25:57.058 }' 00:25:57.058 13:10:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:57.058 13:10:00 -- 
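verify_raid_bdev_state, invoked after every mutation in this suite, boils down to one RPC plus jq assertions: dump all raid bdevs, select the entry by name, and compare state, level, strip size and operational count against the expectation. A plausible reconstruction limited to the fields visible in the JSON above:

    verify_raid_bdev_state() {
        local name=$1 expected_state=$2 level=$3 strip=$4 operational=$5 info
        info=$(rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
               jq -r ".[] | select(.name == \"$name\")")
        [ "$(jq -r .state <<<"$info")" = "$expected_state" ] &&
        [ "$(jq -r .raid_level <<<"$info")" = "$level" ] &&
        [ "$(jq -r .strip_size_kb <<<"$info")" = "$strip" ] &&
        [ "$(jq -r .num_base_bdevs_operational <<<"$info")" = "$operational" ]
    }

    verify_raid_bdev_state Existed_Raid configuring raid5f 64 3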
common/autotest_common.sh@10 -- # set +x 00:25:57.624 13:10:01 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:57.884 [2024-04-17 13:10:01.932854] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:57.884 BaseBdev2 00:25:57.884 13:10:01 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:25:57.884 13:10:01 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:25:57.884 13:10:01 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:25:57.884 13:10:01 -- common/autotest_common.sh@887 -- # local i 00:25:57.884 13:10:01 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:25:57.884 13:10:01 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:25:57.884 13:10:01 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:58.143 13:10:02 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:58.402 [ 00:25:58.402 { 00:25:58.402 "name": "BaseBdev2", 00:25:58.402 "aliases": [ 00:25:58.402 "fcabeb66-bbbb-48f0-8805-103b6ecf9ac3" 00:25:58.402 ], 00:25:58.402 "product_name": "Malloc disk", 00:25:58.402 "block_size": 512, 00:25:58.402 "num_blocks": 65536, 00:25:58.402 "uuid": "fcabeb66-bbbb-48f0-8805-103b6ecf9ac3", 00:25:58.402 "assigned_rate_limits": { 00:25:58.402 "rw_ios_per_sec": 0, 00:25:58.402 "rw_mbytes_per_sec": 0, 00:25:58.402 "r_mbytes_per_sec": 0, 00:25:58.402 "w_mbytes_per_sec": 0 00:25:58.402 }, 00:25:58.402 "claimed": true, 00:25:58.402 "claim_type": "exclusive_write", 00:25:58.402 "zoned": false, 00:25:58.402 "supported_io_types": { 00:25:58.402 "read": true, 00:25:58.402 "write": true, 00:25:58.402 "unmap": true, 00:25:58.402 "write_zeroes": true, 00:25:58.402 "flush": true, 00:25:58.402 "reset": true, 00:25:58.402 "compare": false, 00:25:58.402 "compare_and_write": false, 00:25:58.402 "abort": true, 00:25:58.402 "nvme_admin": false, 00:25:58.402 "nvme_io": false 00:25:58.402 }, 00:25:58.402 "memory_domains": [ 00:25:58.402 { 00:25:58.402 "dma_device_id": "system", 00:25:58.402 "dma_device_type": 1 00:25:58.402 }, 00:25:58.402 { 00:25:58.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:58.402 "dma_device_type": 2 00:25:58.402 } 00:25:58.402 ], 00:25:58.402 "driver_specific": {} 00:25:58.402 } 00:25:58.402 ] 00:25:58.402 13:10:02 -- common/autotest_common.sh@893 -- # return 0 00:25:58.402 13:10:02 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:25:58.402 13:10:02 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:58.402 13:10:02 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:58.402 13:10:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:58.402 13:10:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:58.402 13:10:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:58.402 13:10:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:58.402 13:10:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:58.402 13:10:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:58.402 13:10:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:58.402 13:10:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:58.402 13:10:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:58.402 13:10:02 -- bdev/bdev_raid.sh@127 
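Note the claimed and claim_type fields in the descriptor above: once the raid module takes a base bdev, the malloc bdev reports claimed true with an exclusive_write claim, which is what stops any second consumer from opening it for writes. A one-line assertion of that invariant (jq -e turns the boolean result into an exit status):

    rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 |
        jq -e '.[0].claimed and .[0].claim_type == "exclusive_write"'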
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:58.402 13:10:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:58.662 13:10:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:58.662 "name": "Existed_Raid", 00:25:58.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:58.662 "strip_size_kb": 64, 00:25:58.662 "state": "configuring", 00:25:58.662 "raid_level": "raid5f", 00:25:58.662 "superblock": false, 00:25:58.662 "num_base_bdevs": 3, 00:25:58.662 "num_base_bdevs_discovered": 2, 00:25:58.662 "num_base_bdevs_operational": 3, 00:25:58.662 "base_bdevs_list": [ 00:25:58.662 { 00:25:58.662 "name": "BaseBdev1", 00:25:58.662 "uuid": "d90f8086-6c6c-4114-899d-54008b7e9378", 00:25:58.662 "is_configured": true, 00:25:58.662 "data_offset": 0, 00:25:58.662 "data_size": 65536 00:25:58.662 }, 00:25:58.662 { 00:25:58.662 "name": "BaseBdev2", 00:25:58.662 "uuid": "fcabeb66-bbbb-48f0-8805-103b6ecf9ac3", 00:25:58.662 "is_configured": true, 00:25:58.662 "data_offset": 0, 00:25:58.662 "data_size": 65536 00:25:58.662 }, 00:25:58.662 { 00:25:58.662 "name": "BaseBdev3", 00:25:58.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:58.662 "is_configured": false, 00:25:58.662 "data_offset": 0, 00:25:58.662 "data_size": 0 00:25:58.662 } 00:25:58.662 ] 00:25:58.662 }' 00:25:58.662 13:10:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:58.662 13:10:02 -- common/autotest_common.sh@10 -- # set +x 00:25:59.597 13:10:03 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:59.597 [2024-04-17 13:10:03.723214] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:59.597 [2024-04-17 13:10:03.723295] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:25:59.597 [2024-04-17 13:10:03.723307] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:25:59.597 [2024-04-17 13:10:03.723419] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:25:59.597 [2024-04-17 13:10:03.728817] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:25:59.597 [2024-04-17 13:10:03.728848] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:25:59.597 [2024-04-17 13:10:03.729125] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:59.597 BaseBdev3 00:25:59.597 13:10:03 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:25:59.597 13:10:03 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:25:59.597 13:10:03 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:25:59.597 13:10:03 -- common/autotest_common.sh@887 -- # local i 00:25:59.597 13:10:03 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:25:59.597 13:10:03 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:25:59.597 13:10:03 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:59.855 13:10:03 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:00.113 [ 00:26:00.113 { 00:26:00.113 "name": "BaseBdev3", 00:26:00.113 "aliases": [ 00:26:00.113 "ab3064a7-ceed-4de5-bd87-7b3b5b46453c" 00:26:00.113 ], 00:26:00.113 
"product_name": "Malloc disk", 00:26:00.113 "block_size": 512, 00:26:00.113 "num_blocks": 65536, 00:26:00.113 "uuid": "ab3064a7-ceed-4de5-bd87-7b3b5b46453c", 00:26:00.113 "assigned_rate_limits": { 00:26:00.113 "rw_ios_per_sec": 0, 00:26:00.113 "rw_mbytes_per_sec": 0, 00:26:00.113 "r_mbytes_per_sec": 0, 00:26:00.113 "w_mbytes_per_sec": 0 00:26:00.113 }, 00:26:00.113 "claimed": true, 00:26:00.113 "claim_type": "exclusive_write", 00:26:00.113 "zoned": false, 00:26:00.113 "supported_io_types": { 00:26:00.113 "read": true, 00:26:00.113 "write": true, 00:26:00.113 "unmap": true, 00:26:00.113 "write_zeroes": true, 00:26:00.113 "flush": true, 00:26:00.113 "reset": true, 00:26:00.113 "compare": false, 00:26:00.113 "compare_and_write": false, 00:26:00.113 "abort": true, 00:26:00.113 "nvme_admin": false, 00:26:00.113 "nvme_io": false 00:26:00.113 }, 00:26:00.113 "memory_domains": [ 00:26:00.113 { 00:26:00.113 "dma_device_id": "system", 00:26:00.113 "dma_device_type": 1 00:26:00.113 }, 00:26:00.113 { 00:26:00.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:00.113 "dma_device_type": 2 00:26:00.113 } 00:26:00.113 ], 00:26:00.113 "driver_specific": {} 00:26:00.113 } 00:26:00.113 ] 00:26:00.113 13:10:04 -- common/autotest_common.sh@893 -- # return 0 00:26:00.113 13:10:04 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:26:00.113 13:10:04 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:26:00.113 13:10:04 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:26:00.113 13:10:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:00.113 13:10:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:00.113 13:10:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:00.113 13:10:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:00.113 13:10:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:00.113 13:10:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:00.113 13:10:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:00.113 13:10:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:00.113 13:10:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:00.113 13:10:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:00.113 13:10:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:00.378 13:10:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:00.378 "name": "Existed_Raid", 00:26:00.378 "uuid": "e7f0e241-face-466f-9782-1445ca82ffec", 00:26:00.378 "strip_size_kb": 64, 00:26:00.378 "state": "online", 00:26:00.378 "raid_level": "raid5f", 00:26:00.378 "superblock": false, 00:26:00.378 "num_base_bdevs": 3, 00:26:00.378 "num_base_bdevs_discovered": 3, 00:26:00.378 "num_base_bdevs_operational": 3, 00:26:00.378 "base_bdevs_list": [ 00:26:00.378 { 00:26:00.378 "name": "BaseBdev1", 00:26:00.378 "uuid": "d90f8086-6c6c-4114-899d-54008b7e9378", 00:26:00.378 "is_configured": true, 00:26:00.378 "data_offset": 0, 00:26:00.378 "data_size": 65536 00:26:00.378 }, 00:26:00.378 { 00:26:00.378 "name": "BaseBdev2", 00:26:00.378 "uuid": "fcabeb66-bbbb-48f0-8805-103b6ecf9ac3", 00:26:00.378 "is_configured": true, 00:26:00.378 "data_offset": 0, 00:26:00.378 "data_size": 65536 00:26:00.378 }, 00:26:00.378 { 00:26:00.378 "name": "BaseBdev3", 00:26:00.378 "uuid": "ab3064a7-ceed-4de5-bd87-7b3b5b46453c", 00:26:00.378 "is_configured": true, 00:26:00.378 "data_offset": 0, 00:26:00.378 
"data_size": 65536 00:26:00.378 } 00:26:00.378 ] 00:26:00.378 }' 00:26:00.378 13:10:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:00.378 13:10:04 -- common/autotest_common.sh@10 -- # set +x 00:26:01.313 13:10:05 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:01.572 [2024-04-17 13:10:05.467023] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:01.572 13:10:05 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:26:01.572 13:10:05 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:26:01.572 13:10:05 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:26:01.572 13:10:05 -- bdev/bdev_raid.sh@196 -- # return 0 00:26:01.572 13:10:05 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:26:01.572 13:10:05 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:26:01.572 13:10:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:01.572 13:10:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:01.572 13:10:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:01.572 13:10:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:01.572 13:10:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:26:01.572 13:10:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:01.572 13:10:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:01.572 13:10:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:01.572 13:10:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:01.572 13:10:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:01.572 13:10:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:01.830 13:10:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:01.830 "name": "Existed_Raid", 00:26:01.830 "uuid": "e7f0e241-face-466f-9782-1445ca82ffec", 00:26:01.830 "strip_size_kb": 64, 00:26:01.830 "state": "online", 00:26:01.830 "raid_level": "raid5f", 00:26:01.830 "superblock": false, 00:26:01.830 "num_base_bdevs": 3, 00:26:01.830 "num_base_bdevs_discovered": 2, 00:26:01.830 "num_base_bdevs_operational": 2, 00:26:01.830 "base_bdevs_list": [ 00:26:01.830 { 00:26:01.830 "name": null, 00:26:01.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:01.830 "is_configured": false, 00:26:01.830 "data_offset": 0, 00:26:01.830 "data_size": 65536 00:26:01.830 }, 00:26:01.830 { 00:26:01.830 "name": "BaseBdev2", 00:26:01.830 "uuid": "fcabeb66-bbbb-48f0-8805-103b6ecf9ac3", 00:26:01.830 "is_configured": true, 00:26:01.830 "data_offset": 0, 00:26:01.830 "data_size": 65536 00:26:01.830 }, 00:26:01.830 { 00:26:01.830 "name": "BaseBdev3", 00:26:01.830 "uuid": "ab3064a7-ceed-4de5-bd87-7b3b5b46453c", 00:26:01.830 "is_configured": true, 00:26:01.830 "data_offset": 0, 00:26:01.830 "data_size": 65536 00:26:01.830 } 00:26:01.830 ] 00:26:01.830 }' 00:26:01.830 13:10:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:01.830 13:10:05 -- common/autotest_common.sh@10 -- # set +x 00:26:02.397 13:10:06 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:26:02.398 13:10:06 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:26:02.398 13:10:06 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:26:02.398 13:10:06 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:02.656 13:10:06 -- bdev/bdev_raid.sh@274 -- 
# raid_bdev=Existed_Raid 00:26:02.656 13:10:06 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:02.656 13:10:06 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:26:02.914 [2024-04-17 13:10:06.979007] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:02.914 [2024-04-17 13:10:06.979132] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:03.173 [2024-04-17 13:10:07.063209] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:03.173 13:10:07 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:26:03.173 13:10:07 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:26:03.173 13:10:07 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:03.173 13:10:07 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:26:03.431 13:10:07 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:26:03.431 13:10:07 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:03.431 13:10:07 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:26:03.689 [2024-04-17 13:10:07.663549] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:03.689 [2024-04-17 13:10:07.663635] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:26:03.689 13:10:07 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:26:03.689 13:10:07 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:26:03.689 13:10:07 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:03.689 13:10:07 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:26:03.947 13:10:08 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:26:03.947 13:10:08 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:26:03.947 13:10:08 -- bdev/bdev_raid.sh@287 -- # killprocess 135629 00:26:03.948 13:10:08 -- common/autotest_common.sh@924 -- # '[' -z 135629 ']' 00:26:03.948 13:10:08 -- common/autotest_common.sh@928 -- # kill -0 135629 00:26:03.948 13:10:08 -- common/autotest_common.sh@929 -- # uname 00:26:03.948 13:10:08 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:26:03.948 13:10:08 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 135629 00:26:03.948 killing process with pid 135629 00:26:03.948 13:10:08 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:26:03.948 13:10:08 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:26:03.948 13:10:08 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 135629' 00:26:03.948 13:10:08 -- common/autotest_common.sh@943 -- # kill 135629 00:26:03.948 13:10:08 -- common/autotest_common.sh@948 -- # wait 135629 00:26:03.948 [2024-04-17 13:10:08.075318] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:03.948 [2024-04-17 13:10:08.075433] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:05.324 ************************************ 00:26:05.324 END TEST raid5f_state_function_test 00:26:05.324 ************************************ 00:26:05.324 13:10:09 -- bdev/bdev_raid.sh@289 -- # return 0 00:26:05.324 00:26:05.324 real 0m13.713s 00:26:05.324 user 0m24.516s 00:26:05.324 sys 0m1.427s 00:26:05.324 13:10:09 -- common/autotest_common.sh@1100 -- 
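A second removal exhausts that tolerance: the deconfigure message above shows the array dropping from online to offline once only one of three members is left, and deleting the last member destructs the raid bdev entirely. Sketched with the expected outcomes as comments rather than guarantees:

    rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
    # Expected: the raid bdev still exists, state now reported as "offline".
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[0].state'
    rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
    # Expected: the raid bdev is cleaned up and the list above comes back empty.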
# xtrace_disable 00:26:05.324 13:10:09 -- common/autotest_common.sh@10 -- # set +x 00:26:05.324 13:10:09 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:26:05.324 13:10:09 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:26:05.324 13:10:09 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:26:05.324 13:10:09 -- common/autotest_common.sh@10 -- # set +x 00:26:05.324 ************************************ 00:26:05.324 START TEST raid5f_state_function_test_sb 00:26:05.324 ************************************ 00:26:05.324 13:10:09 -- common/autotest_common.sh@1099 -- # raid_state_function_test raid5f 3 true 00:26:05.324 13:10:09 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:26:05.324 13:10:09 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:26:05.324 13:10:09 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:26:05.324 13:10:09 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:26:05.324 13:10:09 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:26:05.324 13:10:09 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:26:05.324 13:10:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:26:05.324 13:10:09 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:26:05.324 13:10:09 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:26:05.324 13:10:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:26:05.324 13:10:09 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:26:05.324 13:10:09 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:26:05.324 13:10:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:26:05.324 13:10:09 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:26:05.324 13:10:09 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:26:05.324 13:10:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:26:05.324 13:10:09 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:26:05.324 13:10:09 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:26:05.324 13:10:09 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:26:05.325 13:10:09 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:26:05.325 13:10:09 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:26:05.325 13:10:09 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:26:05.325 13:10:09 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:26:05.325 13:10:09 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:26:05.325 13:10:09 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:26:05.325 13:10:09 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:26:05.325 13:10:09 -- bdev/bdev_raid.sh@226 -- # raid_pid=136059 00:26:05.325 13:10:09 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:26:05.325 Process raid pid: 136059 00:26:05.325 13:10:09 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 136059' 00:26:05.325 13:10:09 -- bdev/bdev_raid.sh@228 -- # waitforlisten 136059 /var/tmp/spdk-raid.sock 00:26:05.325 13:10:09 -- common/autotest_common.sh@817 -- # '[' -z 136059 ']' 00:26:05.325 13:10:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:05.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
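Each sub-test launches its own bdev_svc with a private RPC socket (-r /var/tmp/spdk-raid.sock) and parks in waitforlisten until the app answers, which is what the banner above corresponds to. A simplified version of that gate; the real helper keeps the max_retries budget of 100 seen in the trace, and probing with rpc_get_methods is this sketch's choice rather than the helper's exact mechanism:

    waitforlisten() {
        local pid=$1 sock=${2:-/var/tmp/spdk-raid.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1     # app died during startup
            [ -S "$sock" ] && rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && return 0
            sleep 0.1
        done
        return 1
    }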
00:26:05.325 13:10:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:05.325 13:10:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:05.325 13:10:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:05.325 13:10:09 -- common/autotest_common.sh@10 -- # set +x 00:26:05.325 [2024-04-17 13:10:09.363941] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:26:05.325 [2024-04-17 13:10:09.364176] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:05.583 [2024-04-17 13:10:09.521179] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.842 [2024-04-17 13:10:09.791627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:06.100 [2024-04-17 13:10:10.002357] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:06.359 13:10:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:06.359 13:10:10 -- common/autotest_common.sh@850 -- # return 0 00:26:06.360 13:10:10 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:26:06.621 [2024-04-17 13:10:10.598016] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:06.621 [2024-04-17 13:10:10.598119] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:06.621 [2024-04-17 13:10:10.598134] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:06.621 [2024-04-17 13:10:10.598155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:06.621 [2024-04-17 13:10:10.598163] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:06.621 [2024-04-17 13:10:10.598205] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:06.621 13:10:10 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:06.621 13:10:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:06.621 13:10:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:06.621 13:10:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:06.621 13:10:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:06.621 13:10:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:06.621 13:10:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:06.621 13:10:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:06.621 13:10:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:06.621 13:10:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:06.621 13:10:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:06.621 13:10:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:06.929 13:10:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:06.929 "name": "Existed_Raid", 00:26:06.929 "uuid": "8dc137fa-fdcb-45ad-839b-1dc6431baecb", 00:26:06.929 "strip_size_kb": 64, 00:26:06.929 "state": "configuring", 00:26:06.929 
"raid_level": "raid5f", 00:26:06.929 "superblock": true, 00:26:06.929 "num_base_bdevs": 3, 00:26:06.929 "num_base_bdevs_discovered": 0, 00:26:06.929 "num_base_bdevs_operational": 3, 00:26:06.929 "base_bdevs_list": [ 00:26:06.929 { 00:26:06.929 "name": "BaseBdev1", 00:26:06.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:06.929 "is_configured": false, 00:26:06.929 "data_offset": 0, 00:26:06.929 "data_size": 0 00:26:06.929 }, 00:26:06.929 { 00:26:06.929 "name": "BaseBdev2", 00:26:06.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:06.929 "is_configured": false, 00:26:06.929 "data_offset": 0, 00:26:06.929 "data_size": 0 00:26:06.929 }, 00:26:06.929 { 00:26:06.929 "name": "BaseBdev3", 00:26:06.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:06.929 "is_configured": false, 00:26:06.929 "data_offset": 0, 00:26:06.929 "data_size": 0 00:26:06.929 } 00:26:06.929 ] 00:26:06.929 }' 00:26:06.929 13:10:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:06.929 13:10:10 -- common/autotest_common.sh@10 -- # set +x 00:26:07.523 13:10:11 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:07.781 [2024-04-17 13:10:11.922260] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:07.782 [2024-04-17 13:10:11.922333] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:26:08.040 13:10:11 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:26:08.040 [2024-04-17 13:10:12.182352] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:08.040 [2024-04-17 13:10:12.182473] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:08.040 [2024-04-17 13:10:12.182487] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:08.040 [2024-04-17 13:10:12.182514] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:08.040 [2024-04-17 13:10:12.182523] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:08.040 [2024-04-17 13:10:12.182551] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:08.299 13:10:12 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:08.558 [2024-04-17 13:10:12.477659] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:08.558 BaseBdev1 00:26:08.558 13:10:12 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:26:08.558 13:10:12 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:26:08.558 13:10:12 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:26:08.558 13:10:12 -- common/autotest_common.sh@887 -- # local i 00:26:08.558 13:10:12 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:26:08.558 13:10:12 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:26:08.558 13:10:12 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:08.817 13:10:12 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 
2000 00:26:09.076 [ 00:26:09.076 { 00:26:09.076 "name": "BaseBdev1", 00:26:09.076 "aliases": [ 00:26:09.076 "b6a5d9ab-9aa8-48a6-8e42-d4e3d7c724d9" 00:26:09.076 ], 00:26:09.076 "product_name": "Malloc disk", 00:26:09.076 "block_size": 512, 00:26:09.076 "num_blocks": 65536, 00:26:09.076 "uuid": "b6a5d9ab-9aa8-48a6-8e42-d4e3d7c724d9", 00:26:09.076 "assigned_rate_limits": { 00:26:09.076 "rw_ios_per_sec": 0, 00:26:09.076 "rw_mbytes_per_sec": 0, 00:26:09.076 "r_mbytes_per_sec": 0, 00:26:09.076 "w_mbytes_per_sec": 0 00:26:09.076 }, 00:26:09.076 "claimed": true, 00:26:09.076 "claim_type": "exclusive_write", 00:26:09.076 "zoned": false, 00:26:09.076 "supported_io_types": { 00:26:09.076 "read": true, 00:26:09.076 "write": true, 00:26:09.076 "unmap": true, 00:26:09.076 "write_zeroes": true, 00:26:09.076 "flush": true, 00:26:09.076 "reset": true, 00:26:09.076 "compare": false, 00:26:09.076 "compare_and_write": false, 00:26:09.076 "abort": true, 00:26:09.076 "nvme_admin": false, 00:26:09.076 "nvme_io": false 00:26:09.076 }, 00:26:09.076 "memory_domains": [ 00:26:09.076 { 00:26:09.076 "dma_device_id": "system", 00:26:09.076 "dma_device_type": 1 00:26:09.076 }, 00:26:09.076 { 00:26:09.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:09.077 "dma_device_type": 2 00:26:09.077 } 00:26:09.077 ], 00:26:09.077 "driver_specific": {} 00:26:09.077 } 00:26:09.077 ] 00:26:09.077 13:10:12 -- common/autotest_common.sh@893 -- # return 0 00:26:09.077 13:10:12 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:09.077 13:10:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:09.077 13:10:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:09.077 13:10:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:09.077 13:10:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:09.077 13:10:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:09.077 13:10:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:09.077 13:10:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:09.077 13:10:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:09.077 13:10:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:09.077 13:10:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:09.077 13:10:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:09.077 13:10:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:09.077 "name": "Existed_Raid", 00:26:09.077 "uuid": "075fc608-5a52-4515-aab4-434ffc5d4ea4", 00:26:09.077 "strip_size_kb": 64, 00:26:09.077 "state": "configuring", 00:26:09.077 "raid_level": "raid5f", 00:26:09.077 "superblock": true, 00:26:09.077 "num_base_bdevs": 3, 00:26:09.077 "num_base_bdevs_discovered": 1, 00:26:09.077 "num_base_bdevs_operational": 3, 00:26:09.077 "base_bdevs_list": [ 00:26:09.077 { 00:26:09.077 "name": "BaseBdev1", 00:26:09.077 "uuid": "b6a5d9ab-9aa8-48a6-8e42-d4e3d7c724d9", 00:26:09.077 "is_configured": true, 00:26:09.077 "data_offset": 2048, 00:26:09.077 "data_size": 63488 00:26:09.077 }, 00:26:09.077 { 00:26:09.077 "name": "BaseBdev2", 00:26:09.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:09.077 "is_configured": false, 00:26:09.077 "data_offset": 0, 00:26:09.077 "data_size": 0 00:26:09.077 }, 00:26:09.077 { 00:26:09.077 "name": "BaseBdev3", 00:26:09.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:09.077 "is_configured": false, 
00:26:09.077 "data_offset": 0, 00:26:09.077 "data_size": 0 00:26:09.077 } 00:26:09.077 ] 00:26:09.077 }' 00:26:09.077 13:10:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:09.077 13:10:13 -- common/autotest_common.sh@10 -- # set +x 00:26:10.014 13:10:13 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:10.285 [2024-04-17 13:10:14.214342] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:10.285 [2024-04-17 13:10:14.214418] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:26:10.285 13:10:14 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:26:10.285 13:10:14 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:10.556 13:10:14 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:10.814 BaseBdev1 00:26:10.814 13:10:14 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:26:10.814 13:10:14 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:26:10.814 13:10:14 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:26:10.814 13:10:14 -- common/autotest_common.sh@887 -- # local i 00:26:10.814 13:10:14 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:26:10.814 13:10:14 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:26:10.814 13:10:14 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:11.072 13:10:15 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:11.331 [ 00:26:11.331 { 00:26:11.331 "name": "BaseBdev1", 00:26:11.331 "aliases": [ 00:26:11.331 "6b59d50d-7908-4d64-ae8e-e2a1f302a521" 00:26:11.331 ], 00:26:11.331 "product_name": "Malloc disk", 00:26:11.331 "block_size": 512, 00:26:11.331 "num_blocks": 65536, 00:26:11.331 "uuid": "6b59d50d-7908-4d64-ae8e-e2a1f302a521", 00:26:11.331 "assigned_rate_limits": { 00:26:11.331 "rw_ios_per_sec": 0, 00:26:11.331 "rw_mbytes_per_sec": 0, 00:26:11.331 "r_mbytes_per_sec": 0, 00:26:11.331 "w_mbytes_per_sec": 0 00:26:11.331 }, 00:26:11.331 "claimed": false, 00:26:11.331 "zoned": false, 00:26:11.331 "supported_io_types": { 00:26:11.331 "read": true, 00:26:11.331 "write": true, 00:26:11.331 "unmap": true, 00:26:11.331 "write_zeroes": true, 00:26:11.331 "flush": true, 00:26:11.331 "reset": true, 00:26:11.331 "compare": false, 00:26:11.331 "compare_and_write": false, 00:26:11.331 "abort": true, 00:26:11.331 "nvme_admin": false, 00:26:11.331 "nvme_io": false 00:26:11.331 }, 00:26:11.331 "memory_domains": [ 00:26:11.331 { 00:26:11.331 "dma_device_id": "system", 00:26:11.331 "dma_device_type": 1 00:26:11.331 }, 00:26:11.331 { 00:26:11.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:11.331 "dma_device_type": 2 00:26:11.331 } 00:26:11.331 ], 00:26:11.331 "driver_specific": {} 00:26:11.331 } 00:26:11.331 ] 00:26:11.331 13:10:15 -- common/autotest_common.sh@893 -- # return 0 00:26:11.331 13:10:15 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:26:11.590 [2024-04-17 13:10:15.532857] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev1 is claimed 00:26:11.590 [2024-04-17 13:10:15.535108] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:11.590 [2024-04-17 13:10:15.535189] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:11.590 [2024-04-17 13:10:15.535218] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:11.590 [2024-04-17 13:10:15.535244] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:11.591 13:10:15 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:26:11.591 13:10:15 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:26:11.591 13:10:15 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:11.591 13:10:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:11.591 13:10:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:11.591 13:10:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:11.591 13:10:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:11.591 13:10:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:11.591 13:10:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:11.591 13:10:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:11.591 13:10:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:11.591 13:10:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:11.591 13:10:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:11.591 13:10:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:11.850 13:10:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:11.850 "name": "Existed_Raid", 00:26:11.850 "uuid": "d10f7696-2e58-42ef-8bd4-d8a65b5af1df", 00:26:11.850 "strip_size_kb": 64, 00:26:11.850 "state": "configuring", 00:26:11.850 "raid_level": "raid5f", 00:26:11.850 "superblock": true, 00:26:11.850 "num_base_bdevs": 3, 00:26:11.850 "num_base_bdevs_discovered": 1, 00:26:11.850 "num_base_bdevs_operational": 3, 00:26:11.850 "base_bdevs_list": [ 00:26:11.850 { 00:26:11.850 "name": "BaseBdev1", 00:26:11.850 "uuid": "6b59d50d-7908-4d64-ae8e-e2a1f302a521", 00:26:11.850 "is_configured": true, 00:26:11.850 "data_offset": 2048, 00:26:11.850 "data_size": 63488 00:26:11.850 }, 00:26:11.850 { 00:26:11.850 "name": "BaseBdev2", 00:26:11.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:11.850 "is_configured": false, 00:26:11.850 "data_offset": 0, 00:26:11.850 "data_size": 0 00:26:11.850 }, 00:26:11.850 { 00:26:11.850 "name": "BaseBdev3", 00:26:11.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:11.850 "is_configured": false, 00:26:11.850 "data_offset": 0, 00:26:11.850 "data_size": 0 00:26:11.850 } 00:26:11.850 ] 00:26:11.850 }' 00:26:11.850 13:10:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:11.850 13:10:15 -- common/autotest_common.sh@10 -- # set +x 00:26:12.417 13:10:16 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:12.676 [2024-04-17 13:10:16.769675] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:12.676 BaseBdev2 00:26:12.676 13:10:16 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:26:12.676 13:10:16 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:26:12.676 13:10:16 -- 
common/autotest_common.sh@886 -- # local bdev_timeout= 00:26:12.676 13:10:16 -- common/autotest_common.sh@887 -- # local i 00:26:12.676 13:10:16 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:26:12.676 13:10:16 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:26:12.676 13:10:16 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:13.244 13:10:17 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:13.244 [ 00:26:13.244 { 00:26:13.244 "name": "BaseBdev2", 00:26:13.244 "aliases": [ 00:26:13.244 "908f36e6-316d-493b-bb80-7ac4983acc32" 00:26:13.244 ], 00:26:13.244 "product_name": "Malloc disk", 00:26:13.244 "block_size": 512, 00:26:13.244 "num_blocks": 65536, 00:26:13.244 "uuid": "908f36e6-316d-493b-bb80-7ac4983acc32", 00:26:13.244 "assigned_rate_limits": { 00:26:13.244 "rw_ios_per_sec": 0, 00:26:13.244 "rw_mbytes_per_sec": 0, 00:26:13.244 "r_mbytes_per_sec": 0, 00:26:13.244 "w_mbytes_per_sec": 0 00:26:13.244 }, 00:26:13.244 "claimed": true, 00:26:13.244 "claim_type": "exclusive_write", 00:26:13.244 "zoned": false, 00:26:13.244 "supported_io_types": { 00:26:13.244 "read": true, 00:26:13.244 "write": true, 00:26:13.244 "unmap": true, 00:26:13.244 "write_zeroes": true, 00:26:13.244 "flush": true, 00:26:13.244 "reset": true, 00:26:13.244 "compare": false, 00:26:13.244 "compare_and_write": false, 00:26:13.244 "abort": true, 00:26:13.244 "nvme_admin": false, 00:26:13.244 "nvme_io": false 00:26:13.244 }, 00:26:13.244 "memory_domains": [ 00:26:13.244 { 00:26:13.244 "dma_device_id": "system", 00:26:13.244 "dma_device_type": 1 00:26:13.244 }, 00:26:13.244 { 00:26:13.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:13.244 "dma_device_type": 2 00:26:13.244 } 00:26:13.244 ], 00:26:13.244 "driver_specific": {} 00:26:13.244 } 00:26:13.244 ] 00:26:13.502 13:10:17 -- common/autotest_common.sh@893 -- # return 0 00:26:13.502 13:10:17 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:26:13.502 13:10:17 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:26:13.502 13:10:17 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:13.502 13:10:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:13.502 13:10:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:13.502 13:10:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:13.502 13:10:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:13.502 13:10:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:13.502 13:10:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:13.502 13:10:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:13.502 13:10:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:13.502 13:10:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:13.502 13:10:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:13.502 13:10:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:13.761 13:10:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:13.761 "name": "Existed_Raid", 00:26:13.761 "uuid": "d10f7696-2e58-42ef-8bd4-d8a65b5af1df", 00:26:13.761 "strip_size_kb": 64, 00:26:13.761 "state": "configuring", 00:26:13.761 "raid_level": "raid5f", 00:26:13.761 "superblock": true, 00:26:13.761 
"num_base_bdevs": 3, 00:26:13.761 "num_base_bdevs_discovered": 2, 00:26:13.761 "num_base_bdevs_operational": 3, 00:26:13.761 "base_bdevs_list": [ 00:26:13.761 { 00:26:13.761 "name": "BaseBdev1", 00:26:13.761 "uuid": "6b59d50d-7908-4d64-ae8e-e2a1f302a521", 00:26:13.761 "is_configured": true, 00:26:13.761 "data_offset": 2048, 00:26:13.761 "data_size": 63488 00:26:13.761 }, 00:26:13.761 { 00:26:13.761 "name": "BaseBdev2", 00:26:13.761 "uuid": "908f36e6-316d-493b-bb80-7ac4983acc32", 00:26:13.761 "is_configured": true, 00:26:13.761 "data_offset": 2048, 00:26:13.761 "data_size": 63488 00:26:13.761 }, 00:26:13.761 { 00:26:13.761 "name": "BaseBdev3", 00:26:13.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:13.761 "is_configured": false, 00:26:13.761 "data_offset": 0, 00:26:13.761 "data_size": 0 00:26:13.761 } 00:26:13.761 ] 00:26:13.761 }' 00:26:13.761 13:10:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:13.761 13:10:17 -- common/autotest_common.sh@10 -- # set +x 00:26:14.328 13:10:18 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:14.586 [2024-04-17 13:10:18.673590] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:14.586 [2024-04-17 13:10:18.673939] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:26:14.586 [2024-04-17 13:10:18.673957] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:26:14.586 [2024-04-17 13:10:18.674126] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:26:14.586 BaseBdev3 00:26:14.586 [2024-04-17 13:10:18.679962] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:26:14.586 [2024-04-17 13:10:18.679991] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:26:14.586 [2024-04-17 13:10:18.680206] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:14.586 13:10:18 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:26:14.586 13:10:18 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:26:14.586 13:10:18 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:26:14.586 13:10:18 -- common/autotest_common.sh@887 -- # local i 00:26:14.586 13:10:18 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:26:14.586 13:10:18 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:26:14.586 13:10:18 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:14.851 13:10:18 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:15.115 [ 00:26:15.115 { 00:26:15.115 "name": "BaseBdev3", 00:26:15.115 "aliases": [ 00:26:15.115 "648c191d-d2d3-473c-a3e6-6acdfde2f136" 00:26:15.115 ], 00:26:15.116 "product_name": "Malloc disk", 00:26:15.116 "block_size": 512, 00:26:15.116 "num_blocks": 65536, 00:26:15.116 "uuid": "648c191d-d2d3-473c-a3e6-6acdfde2f136", 00:26:15.116 "assigned_rate_limits": { 00:26:15.116 "rw_ios_per_sec": 0, 00:26:15.116 "rw_mbytes_per_sec": 0, 00:26:15.116 "r_mbytes_per_sec": 0, 00:26:15.116 "w_mbytes_per_sec": 0 00:26:15.116 }, 00:26:15.116 "claimed": true, 00:26:15.116 "claim_type": "exclusive_write", 00:26:15.116 "zoned": false, 00:26:15.116 "supported_io_types": { 00:26:15.116 "read": true, 
00:26:15.116 "write": true, 00:26:15.116 "unmap": true, 00:26:15.116 "write_zeroes": true, 00:26:15.116 "flush": true, 00:26:15.116 "reset": true, 00:26:15.116 "compare": false, 00:26:15.116 "compare_and_write": false, 00:26:15.116 "abort": true, 00:26:15.116 "nvme_admin": false, 00:26:15.116 "nvme_io": false 00:26:15.116 }, 00:26:15.116 "memory_domains": [ 00:26:15.116 { 00:26:15.116 "dma_device_id": "system", 00:26:15.116 "dma_device_type": 1 00:26:15.116 }, 00:26:15.116 { 00:26:15.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:15.116 "dma_device_type": 2 00:26:15.116 } 00:26:15.116 ], 00:26:15.116 "driver_specific": {} 00:26:15.116 } 00:26:15.116 ] 00:26:15.116 13:10:19 -- common/autotest_common.sh@893 -- # return 0 00:26:15.116 13:10:19 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:26:15.116 13:10:19 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:26:15.116 13:10:19 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:26:15.116 13:10:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:15.116 13:10:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:15.116 13:10:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:15.116 13:10:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:15.116 13:10:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:15.116 13:10:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:15.116 13:10:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:15.116 13:10:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:15.116 13:10:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:15.116 13:10:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:15.116 13:10:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:15.374 13:10:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:15.374 "name": "Existed_Raid", 00:26:15.374 "uuid": "d10f7696-2e58-42ef-8bd4-d8a65b5af1df", 00:26:15.374 "strip_size_kb": 64, 00:26:15.374 "state": "online", 00:26:15.374 "raid_level": "raid5f", 00:26:15.374 "superblock": true, 00:26:15.374 "num_base_bdevs": 3, 00:26:15.374 "num_base_bdevs_discovered": 3, 00:26:15.374 "num_base_bdevs_operational": 3, 00:26:15.374 "base_bdevs_list": [ 00:26:15.374 { 00:26:15.374 "name": "BaseBdev1", 00:26:15.374 "uuid": "6b59d50d-7908-4d64-ae8e-e2a1f302a521", 00:26:15.374 "is_configured": true, 00:26:15.374 "data_offset": 2048, 00:26:15.374 "data_size": 63488 00:26:15.374 }, 00:26:15.374 { 00:26:15.374 "name": "BaseBdev2", 00:26:15.374 "uuid": "908f36e6-316d-493b-bb80-7ac4983acc32", 00:26:15.374 "is_configured": true, 00:26:15.374 "data_offset": 2048, 00:26:15.374 "data_size": 63488 00:26:15.374 }, 00:26:15.374 { 00:26:15.374 "name": "BaseBdev3", 00:26:15.374 "uuid": "648c191d-d2d3-473c-a3e6-6acdfde2f136", 00:26:15.374 "is_configured": true, 00:26:15.374 "data_offset": 2048, 00:26:15.374 "data_size": 63488 00:26:15.374 } 00:26:15.374 ] 00:26:15.374 }' 00:26:15.374 13:10:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:15.374 13:10:19 -- common/autotest_common.sh@10 -- # set +x 00:26:16.310 13:10:20 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:16.310 [2024-04-17 13:10:20.403356] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:16.568 13:10:20 -- bdev/bdev_raid.sh@263 -- # local 
expected_state 00:26:16.568 13:10:20 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:26:16.568 13:10:20 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:26:16.568 13:10:20 -- bdev/bdev_raid.sh@196 -- # return 0 00:26:16.568 13:10:20 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:26:16.568 13:10:20 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:26:16.568 13:10:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:16.568 13:10:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:16.568 13:10:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:16.568 13:10:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:16.569 13:10:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:26:16.569 13:10:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:16.569 13:10:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:16.569 13:10:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:16.569 13:10:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:16.569 13:10:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:16.569 13:10:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:16.828 13:10:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:16.828 "name": "Existed_Raid", 00:26:16.828 "uuid": "d10f7696-2e58-42ef-8bd4-d8a65b5af1df", 00:26:16.828 "strip_size_kb": 64, 00:26:16.828 "state": "online", 00:26:16.828 "raid_level": "raid5f", 00:26:16.828 "superblock": true, 00:26:16.828 "num_base_bdevs": 3, 00:26:16.828 "num_base_bdevs_discovered": 2, 00:26:16.828 "num_base_bdevs_operational": 2, 00:26:16.828 "base_bdevs_list": [ 00:26:16.828 { 00:26:16.828 "name": null, 00:26:16.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:16.828 "is_configured": false, 00:26:16.828 "data_offset": 2048, 00:26:16.828 "data_size": 63488 00:26:16.828 }, 00:26:16.828 { 00:26:16.828 "name": "BaseBdev2", 00:26:16.828 "uuid": "908f36e6-316d-493b-bb80-7ac4983acc32", 00:26:16.828 "is_configured": true, 00:26:16.828 "data_offset": 2048, 00:26:16.828 "data_size": 63488 00:26:16.828 }, 00:26:16.828 { 00:26:16.828 "name": "BaseBdev3", 00:26:16.828 "uuid": "648c191d-d2d3-473c-a3e6-6acdfde2f136", 00:26:16.828 "is_configured": true, 00:26:16.828 "data_offset": 2048, 00:26:16.828 "data_size": 63488 00:26:16.828 } 00:26:16.828 ] 00:26:16.828 }' 00:26:16.828 13:10:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:16.828 13:10:20 -- common/autotest_common.sh@10 -- # set +x 00:26:17.395 13:10:21 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:26:17.395 13:10:21 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:26:17.395 13:10:21 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:17.395 13:10:21 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:26:17.653 13:10:21 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:26:17.653 13:10:21 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:17.653 13:10:21 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:26:17.911 [2024-04-17 13:10:21.933289] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:17.911 [2024-04-17 13:10:21.933449] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to 
offline 00:26:17.911 [2024-04-17 13:10:22.018348] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:17.911 13:10:22 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:26:17.911 13:10:22 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:26:17.911 13:10:22 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:17.912 13:10:22 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:26:18.170 13:10:22 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:26:18.170 13:10:22 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:18.170 13:10:22 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:26:18.429 [2024-04-17 13:10:22.556096] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:18.429 [2024-04-17 13:10:22.556195] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:26:18.687 13:10:22 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:26:18.687 13:10:22 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:26:18.687 13:10:22 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:18.687 13:10:22 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:26:18.946 13:10:22 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:26:18.946 13:10:22 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:26:18.946 13:10:22 -- bdev/bdev_raid.sh@287 -- # killprocess 136059 00:26:18.946 13:10:22 -- common/autotest_common.sh@924 -- # '[' -z 136059 ']' 00:26:18.946 13:10:22 -- common/autotest_common.sh@928 -- # kill -0 136059 00:26:18.947 13:10:22 -- common/autotest_common.sh@929 -- # uname 00:26:18.947 13:10:22 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:26:18.947 13:10:22 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 136059 00:26:18.947 killing process with pid 136059 00:26:18.947 13:10:22 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:26:18.947 13:10:22 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:26:18.947 13:10:22 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 136059' 00:26:18.947 13:10:22 -- common/autotest_common.sh@943 -- # kill 136059 00:26:18.947 13:10:22 -- common/autotest_common.sh@948 -- # wait 136059 00:26:18.947 [2024-04-17 13:10:22.924064] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:18.947 [2024-04-17 13:10:22.924463] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:20.324 ************************************ 00:26:20.324 END TEST raid5f_state_function_test_sb 00:26:20.324 ************************************ 00:26:20.324 13:10:24 -- bdev/bdev_raid.sh@289 -- # return 0 00:26:20.324 00:26:20.325 real 0m14.768s 00:26:20.325 user 0m26.218s 00:26:20.325 sys 0m1.695s 00:26:20.325 13:10:24 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:26:20.325 13:10:24 -- common/autotest_common.sh@10 -- # set +x 00:26:20.325 13:10:24 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:26:20.325 13:10:24 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:26:20.325 13:10:24 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:26:20.325 13:10:24 -- common/autotest_common.sh@10 -- # set +x 00:26:20.325 ************************************ 00:26:20.325 START TEST 
raid5f_superblock_test 00:26:20.325 ************************************ 00:26:20.325 13:10:24 -- common/autotest_common.sh@1099 -- # raid_superblock_test raid5f 3 00:26:20.325 13:10:24 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:26:20.325 13:10:24 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:26:20.325 13:10:24 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:26:20.325 13:10:24 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:26:20.325 13:10:24 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:26:20.325 13:10:24 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:26:20.325 13:10:24 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:26:20.325 13:10:24 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:26:20.325 13:10:24 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:26:20.325 13:10:24 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:26:20.325 13:10:24 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:26:20.325 13:10:24 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:26:20.325 13:10:24 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:26:20.325 13:10:24 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:26:20.325 13:10:24 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:26:20.325 13:10:24 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:26:20.325 13:10:24 -- bdev/bdev_raid.sh@357 -- # raid_pid=136483 00:26:20.325 13:10:24 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:26:20.325 13:10:24 -- bdev/bdev_raid.sh@358 -- # waitforlisten 136483 /var/tmp/spdk-raid.sock 00:26:20.325 13:10:24 -- common/autotest_common.sh@817 -- # '[' -z 136483 ']' 00:26:20.325 13:10:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:20.325 13:10:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:20.325 13:10:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:20.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:20.325 13:10:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:20.325 13:10:24 -- common/autotest_common.sh@10 -- # set +x 00:26:20.325 [2024-04-17 13:10:24.228345] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
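Before the trace resumes: this superblock test layers a passthru bdev over each malloc disk, so the base bdevs carry fixed UUIDs and an on-disk raid superblock that survives detaching and re-examining them. A minimal sketch of the construction it performs against the freshly started target above — shorthand ours, commands as traced below:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # One passthru bdev per malloc disk; the fixed UUIDs identify the
  # base bdevs in the superblock written to each of them.
  for i in 1 2 3; do
      $rpc bdev_malloc_create 32 512 -b malloc$i
      $rpc bdev_passthru_create -b malloc$i -p pt$i \
          -u 00000000-0000-0000-0000-00000000000$i
  done

  # -s writes a raid superblock to every base bdev at assembly time.
  $rpc bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s

  # Re-creating directly on the malloc disks then fails with -17
  # ("File exists"): each of them already carries a raid superblock.
  $rpc bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1
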
00:26:20.325 [2024-04-17 13:10:24.228529] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136483 ] 00:26:20.325 [2024-04-17 13:10:24.396466] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.584 [2024-04-17 13:10:24.613639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:20.843 [2024-04-17 13:10:24.809980] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:21.102 13:10:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:21.102 13:10:25 -- common/autotest_common.sh@850 -- # return 0 00:26:21.102 13:10:25 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:26:21.102 13:10:25 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:26:21.102 13:10:25 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:26:21.102 13:10:25 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:26:21.102 13:10:25 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:26:21.102 13:10:25 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:21.102 13:10:25 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:26:21.102 13:10:25 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:21.102 13:10:25 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:26:21.671 malloc1 00:26:21.671 13:10:25 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:21.671 [2024-04-17 13:10:25.741267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:21.671 [2024-04-17 13:10:25.741372] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:21.671 [2024-04-17 13:10:25.741412] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:26:21.671 [2024-04-17 13:10:25.741472] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:21.671 [2024-04-17 13:10:25.744011] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:21.671 [2024-04-17 13:10:25.744060] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:21.671 pt1 00:26:21.671 13:10:25 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:26:21.671 13:10:25 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:26:21.671 13:10:25 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:26:21.671 13:10:25 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:26:21.671 13:10:25 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:26:21.671 13:10:25 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:21.671 13:10:25 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:26:21.671 13:10:25 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:21.671 13:10:25 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:26:21.929 malloc2 00:26:21.929 13:10:26 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:26:22.207 [2024-04-17 13:10:26.242083] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:22.207 [2024-04-17 13:10:26.242179] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:22.207 [2024-04-17 13:10:26.242227] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:26:22.207 [2024-04-17 13:10:26.242287] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:22.207 [2024-04-17 13:10:26.244827] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:22.207 [2024-04-17 13:10:26.244874] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:22.207 pt2 00:26:22.207 13:10:26 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:26:22.207 13:10:26 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:26:22.207 13:10:26 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:26:22.207 13:10:26 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:26:22.207 13:10:26 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:26:22.207 13:10:26 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:22.207 13:10:26 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:26:22.207 13:10:26 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:22.207 13:10:26 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:26:22.466 malloc3 00:26:22.466 13:10:26 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:22.725 [2024-04-17 13:10:26.760865] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:22.725 [2024-04-17 13:10:26.760960] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:22.725 [2024-04-17 13:10:26.761005] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:26:22.725 [2024-04-17 13:10:26.761056] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:22.725 [2024-04-17 13:10:26.763523] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:22.725 [2024-04-17 13:10:26.763597] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:22.725 pt3 00:26:22.725 13:10:26 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:26:22.725 13:10:26 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:26:22.725 13:10:26 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:26:22.984 [2024-04-17 13:10:26.984956] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:22.984 [2024-04-17 13:10:26.987082] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:22.984 [2024-04-17 13:10:26.987164] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:22.984 [2024-04-17 13:10:26.987404] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:26:22.984 [2024-04-17 13:10:26.987429] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:26:22.984 [2024-04-17 13:10:26.987585] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:26:22.984 [2024-04-17 13:10:26.992753] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:26:22.984 [2024-04-17 13:10:26.992782] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:26:22.984 [2024-04-17 13:10:26.992979] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:22.984 13:10:26 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:22.984 13:10:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:22.984 13:10:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:22.984 13:10:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:22.984 13:10:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:22.984 13:10:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:22.984 13:10:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:22.984 13:10:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:22.984 13:10:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:22.984 13:10:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:22.984 13:10:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:22.984 13:10:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:23.243 13:10:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:23.243 "name": "raid_bdev1", 00:26:23.243 "uuid": "22ac2041-149b-4700-a610-f16aeaf0699f", 00:26:23.243 "strip_size_kb": 64, 00:26:23.243 "state": "online", 00:26:23.243 "raid_level": "raid5f", 00:26:23.243 "superblock": true, 00:26:23.243 "num_base_bdevs": 3, 00:26:23.243 "num_base_bdevs_discovered": 3, 00:26:23.243 "num_base_bdevs_operational": 3, 00:26:23.243 "base_bdevs_list": [ 00:26:23.243 { 00:26:23.243 "name": "pt1", 00:26:23.243 "uuid": "c984aed9-e9aa-5842-8b99-45d28c9c3926", 00:26:23.243 "is_configured": true, 00:26:23.243 "data_offset": 2048, 00:26:23.243 "data_size": 63488 00:26:23.243 }, 00:26:23.243 { 00:26:23.243 "name": "pt2", 00:26:23.243 "uuid": "e95a44ac-f484-51a8-a007-aae7f6964385", 00:26:23.243 "is_configured": true, 00:26:23.243 "data_offset": 2048, 00:26:23.243 "data_size": 63488 00:26:23.243 }, 00:26:23.243 { 00:26:23.243 "name": "pt3", 00:26:23.243 "uuid": "3382e88e-f4d5-5020-a9c8-1381907b1d1d", 00:26:23.243 "is_configured": true, 00:26:23.243 "data_offset": 2048, 00:26:23.243 "data_size": 63488 00:26:23.243 } 00:26:23.243 ] 00:26:23.243 }' 00:26:23.243 13:10:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:23.243 13:10:27 -- common/autotest_common.sh@10 -- # set +x 00:26:24.184 13:10:27 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:24.184 13:10:27 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:26:24.184 [2024-04-17 13:10:28.250872] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:24.184 13:10:28 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=22ac2041-149b-4700-a610-f16aeaf0699f 00:26:24.184 13:10:28 -- bdev/bdev_raid.sh@380 -- # '[' -z 22ac2041-149b-4700-a610-f16aeaf0699f ']' 00:26:24.184 13:10:28 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:24.444 [2024-04-17 13:10:28.514757] 
bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:24.444 [2024-04-17 13:10:28.514819] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:24.444 [2024-04-17 13:10:28.514929] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:24.444 [2024-04-17 13:10:28.515027] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:24.444 [2024-04-17 13:10:28.515040] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:26:24.444 13:10:28 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:24.444 13:10:28 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:26:24.723 13:10:28 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:26:24.723 13:10:28 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:26:24.723 13:10:28 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:26:24.723 13:10:28 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:26:25.007 13:10:29 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:26:25.007 13:10:29 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:25.267 13:10:29 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:26:25.267 13:10:29 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:26:25.527 13:10:29 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:26:25.527 13:10:29 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:26:25.787 13:10:29 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:26:25.787 13:10:29 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:26:25.787 13:10:29 -- common/autotest_common.sh@638 -- # local es=0 00:26:25.787 13:10:29 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:26:25.787 13:10:29 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:25.787 13:10:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:25.787 13:10:29 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:25.787 13:10:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:25.787 13:10:29 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:25.787 13:10:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:25.787 13:10:29 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:25.787 13:10:29 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:26:25.787 13:10:29 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:26:26.046 [2024-04-17 13:10:30.055145] 
bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:26:26.046 [2024-04-17 13:10:30.057319] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:26:26.046 [2024-04-17 13:10:30.057383] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:26:26.046 [2024-04-17 13:10:30.057439] bdev_raid.c:2995:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:26:26.046 [2024-04-17 13:10:30.057539] bdev_raid.c:2995:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:26:26.046 [2024-04-17 13:10:30.057586] bdev_raid.c:2995:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:26:26.046 [2024-04-17 13:10:30.057636] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:26.046 [2024-04-17 13:10:30.057657] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring 00:26:26.046 request: 00:26:26.046 { 00:26:26.046 "name": "raid_bdev1", 00:26:26.046 "raid_level": "raid5f", 00:26:26.046 "base_bdevs": [ 00:26:26.046 "malloc1", 00:26:26.046 "malloc2", 00:26:26.046 "malloc3" 00:26:26.046 ], 00:26:26.046 "superblock": false, 00:26:26.046 "strip_size_kb": 64, 00:26:26.046 "method": "bdev_raid_create", 00:26:26.046 "req_id": 1 00:26:26.046 } 00:26:26.046 Got JSON-RPC error response 00:26:26.046 response: 00:26:26.046 { 00:26:26.046 "code": -17, 00:26:26.046 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:26:26.046 } 00:26:26.046 13:10:30 -- common/autotest_common.sh@641 -- # es=1 00:26:26.046 13:10:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:26.046 13:10:30 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:26.046 13:10:30 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:26.046 13:10:30 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:26.046 13:10:30 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:26:26.304 13:10:30 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:26:26.304 13:10:30 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:26:26.304 13:10:30 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:26.563 [2024-04-17 13:10:30.539156] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:26.563 [2024-04-17 13:10:30.539273] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:26.563 [2024-04-17 13:10:30.539322] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:26:26.563 [2024-04-17 13:10:30.539344] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:26.563 [2024-04-17 13:10:30.541881] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:26.563 [2024-04-17 13:10:30.541947] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:26.563 [2024-04-17 13:10:30.542099] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:26:26.563 [2024-04-17 13:10:30.542164] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:26.563 pt1 00:26:26.563 13:10:30 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 
configuring raid5f 64 3 00:26:26.563 13:10:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:26.563 13:10:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:26.563 13:10:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:26.563 13:10:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:26.563 13:10:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:26.563 13:10:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:26.563 13:10:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:26.563 13:10:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:26.563 13:10:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:26.563 13:10:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:26.563 13:10:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:26.833 13:10:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:26.833 "name": "raid_bdev1", 00:26:26.833 "uuid": "22ac2041-149b-4700-a610-f16aeaf0699f", 00:26:26.833 "strip_size_kb": 64, 00:26:26.833 "state": "configuring", 00:26:26.833 "raid_level": "raid5f", 00:26:26.833 "superblock": true, 00:26:26.833 "num_base_bdevs": 3, 00:26:26.833 "num_base_bdevs_discovered": 1, 00:26:26.833 "num_base_bdevs_operational": 3, 00:26:26.833 "base_bdevs_list": [ 00:26:26.833 { 00:26:26.833 "name": "pt1", 00:26:26.834 "uuid": "c984aed9-e9aa-5842-8b99-45d28c9c3926", 00:26:26.834 "is_configured": true, 00:26:26.834 "data_offset": 2048, 00:26:26.834 "data_size": 63488 00:26:26.834 }, 00:26:26.834 { 00:26:26.834 "name": null, 00:26:26.834 "uuid": "e95a44ac-f484-51a8-a007-aae7f6964385", 00:26:26.834 "is_configured": false, 00:26:26.834 "data_offset": 2048, 00:26:26.834 "data_size": 63488 00:26:26.834 }, 00:26:26.834 { 00:26:26.834 "name": null, 00:26:26.834 "uuid": "3382e88e-f4d5-5020-a9c8-1381907b1d1d", 00:26:26.834 "is_configured": false, 00:26:26.834 "data_offset": 2048, 00:26:26.834 "data_size": 63488 00:26:26.834 } 00:26:26.834 ] 00:26:26.834 }' 00:26:26.834 13:10:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:26.834 13:10:30 -- common/autotest_common.sh@10 -- # set +x 00:26:27.776 13:10:31 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:26:27.776 13:10:31 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:27.776 [2024-04-17 13:10:31.839522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:27.776 [2024-04-17 13:10:31.839670] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:27.776 [2024-04-17 13:10:31.839726] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:26:27.776 [2024-04-17 13:10:31.839749] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:27.776 [2024-04-17 13:10:31.840272] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:27.776 [2024-04-17 13:10:31.840316] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:27.776 [2024-04-17 13:10:31.840443] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:26:27.776 [2024-04-17 13:10:31.840472] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:27.776 pt2 00:26:27.776 13:10:31 -- 
bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:28.034 [2024-04-17 13:10:32.075622] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:26:28.034 13:10:32 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:26:28.034 13:10:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:28.034 13:10:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:28.034 13:10:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:28.034 13:10:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:28.034 13:10:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:28.034 13:10:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:28.034 13:10:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:28.034 13:10:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:28.034 13:10:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:28.034 13:10:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:28.034 13:10:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:28.293 13:10:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:28.293 "name": "raid_bdev1", 00:26:28.293 "uuid": "22ac2041-149b-4700-a610-f16aeaf0699f", 00:26:28.294 "strip_size_kb": 64, 00:26:28.294 "state": "configuring", 00:26:28.294 "raid_level": "raid5f", 00:26:28.294 "superblock": true, 00:26:28.294 "num_base_bdevs": 3, 00:26:28.294 "num_base_bdevs_discovered": 1, 00:26:28.294 "num_base_bdevs_operational": 3, 00:26:28.294 "base_bdevs_list": [ 00:26:28.294 { 00:26:28.294 "name": "pt1", 00:26:28.294 "uuid": "c984aed9-e9aa-5842-8b99-45d28c9c3926", 00:26:28.294 "is_configured": true, 00:26:28.294 "data_offset": 2048, 00:26:28.294 "data_size": 63488 00:26:28.294 }, 00:26:28.294 { 00:26:28.294 "name": null, 00:26:28.294 "uuid": "e95a44ac-f484-51a8-a007-aae7f6964385", 00:26:28.294 "is_configured": false, 00:26:28.294 "data_offset": 2048, 00:26:28.294 "data_size": 63488 00:26:28.294 }, 00:26:28.294 { 00:26:28.294 "name": null, 00:26:28.294 "uuid": "3382e88e-f4d5-5020-a9c8-1381907b1d1d", 00:26:28.294 "is_configured": false, 00:26:28.294 "data_offset": 2048, 00:26:28.294 "data_size": 63488 00:26:28.294 } 00:26:28.294 ] 00:26:28.294 }' 00:26:28.294 13:10:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:28.294 13:10:32 -- common/autotest_common.sh@10 -- # set +x 00:26:29.231 13:10:33 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:26:29.231 13:10:33 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:26:29.231 13:10:33 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:29.231 [2024-04-17 13:10:33.275937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:29.231 [2024-04-17 13:10:33.276090] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:29.231 [2024-04-17 13:10:33.276134] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:26:29.231 [2024-04-17 13:10:33.276162] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:29.231 [2024-04-17 13:10:33.276714] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:29.231 [2024-04-17 13:10:33.276758] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:29.231 [2024-04-17 13:10:33.276879] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:26:29.231 [2024-04-17 13:10:33.276937] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:29.231 pt2 00:26:29.231 13:10:33 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:26:29.231 13:10:33 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:26:29.231 13:10:33 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:29.489 [2024-04-17 13:10:33.571993] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:29.489 [2024-04-17 13:10:33.572083] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:29.489 [2024-04-17 13:10:33.572124] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:26:29.489 [2024-04-17 13:10:33.572153] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:29.489 [2024-04-17 13:10:33.572658] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:29.489 [2024-04-17 13:10:33.572717] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:29.489 [2024-04-17 13:10:33.572844] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:26:29.489 [2024-04-17 13:10:33.572874] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:29.489 [2024-04-17 13:10:33.573023] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:26:29.489 [2024-04-17 13:10:33.573036] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:26:29.489 [2024-04-17 13:10:33.573150] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:26:29.489 [2024-04-17 13:10:33.578071] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:26:29.489 [2024-04-17 13:10:33.578100] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:26:29.489 [2024-04-17 13:10:33.578288] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:29.489 pt3 00:26:29.489 13:10:33 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:26:29.489 13:10:33 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:26:29.490 13:10:33 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:29.490 13:10:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:29.490 13:10:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:29.490 13:10:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:29.490 13:10:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:29.490 13:10:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:29.490 13:10:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:29.490 13:10:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:29.490 13:10:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:29.490 13:10:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:29.490 13:10:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:29.490 
13:10:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:29.815 13:10:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:29.815 "name": "raid_bdev1", 00:26:29.815 "uuid": "22ac2041-149b-4700-a610-f16aeaf0699f", 00:26:29.815 "strip_size_kb": 64, 00:26:29.815 "state": "online", 00:26:29.815 "raid_level": "raid5f", 00:26:29.815 "superblock": true, 00:26:29.815 "num_base_bdevs": 3, 00:26:29.815 "num_base_bdevs_discovered": 3, 00:26:29.815 "num_base_bdevs_operational": 3, 00:26:29.815 "base_bdevs_list": [ 00:26:29.815 { 00:26:29.815 "name": "pt1", 00:26:29.815 "uuid": "c984aed9-e9aa-5842-8b99-45d28c9c3926", 00:26:29.815 "is_configured": true, 00:26:29.815 "data_offset": 2048, 00:26:29.815 "data_size": 63488 00:26:29.815 }, 00:26:29.815 { 00:26:29.815 "name": "pt2", 00:26:29.815 "uuid": "e95a44ac-f484-51a8-a007-aae7f6964385", 00:26:29.815 "is_configured": true, 00:26:29.815 "data_offset": 2048, 00:26:29.815 "data_size": 63488 00:26:29.815 }, 00:26:29.815 { 00:26:29.815 "name": "pt3", 00:26:29.815 "uuid": "3382e88e-f4d5-5020-a9c8-1381907b1d1d", 00:26:29.815 "is_configured": true, 00:26:29.815 "data_offset": 2048, 00:26:29.815 "data_size": 63488 00:26:29.815 } 00:26:29.815 ] 00:26:29.815 }' 00:26:29.815 13:10:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:29.815 13:10:33 -- common/autotest_common.sh@10 -- # set +x 00:26:30.764 13:10:34 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:30.764 13:10:34 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:26:30.764 [2024-04-17 13:10:34.848271] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:30.764 13:10:34 -- bdev/bdev_raid.sh@430 -- # '[' 22ac2041-149b-4700-a610-f16aeaf0699f '!=' 22ac2041-149b-4700-a610-f16aeaf0699f ']' 00:26:30.764 13:10:34 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:26:30.764 13:10:34 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:26:30.764 13:10:34 -- bdev/bdev_raid.sh@196 -- # return 0 00:26:30.764 13:10:34 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:26:31.022 [2024-04-17 13:10:35.140155] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:26:31.022 13:10:35 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:26:31.022 13:10:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:31.022 13:10:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:31.022 13:10:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:31.022 13:10:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:31.022 13:10:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:26:31.022 13:10:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:31.022 13:10:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:31.022 13:10:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:31.022 13:10:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:31.022 13:10:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:31.022 13:10:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:31.281 13:10:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:31.281 "name": "raid_bdev1", 00:26:31.281 "uuid": "22ac2041-149b-4700-a610-f16aeaf0699f", 00:26:31.281 "strip_size_kb": 64, 
00:26:31.281 "state": "online", 00:26:31.281 "raid_level": "raid5f", 00:26:31.281 "superblock": true, 00:26:31.281 "num_base_bdevs": 3, 00:26:31.281 "num_base_bdevs_discovered": 2, 00:26:31.281 "num_base_bdevs_operational": 2, 00:26:31.281 "base_bdevs_list": [ 00:26:31.281 { 00:26:31.281 "name": null, 00:26:31.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:31.281 "is_configured": false, 00:26:31.281 "data_offset": 2048, 00:26:31.281 "data_size": 63488 00:26:31.281 }, 00:26:31.281 { 00:26:31.281 "name": "pt2", 00:26:31.281 "uuid": "e95a44ac-f484-51a8-a007-aae7f6964385", 00:26:31.281 "is_configured": true, 00:26:31.281 "data_offset": 2048, 00:26:31.281 "data_size": 63488 00:26:31.281 }, 00:26:31.281 { 00:26:31.281 "name": "pt3", 00:26:31.281 "uuid": "3382e88e-f4d5-5020-a9c8-1381907b1d1d", 00:26:31.281 "is_configured": true, 00:26:31.281 "data_offset": 2048, 00:26:31.281 "data_size": 63488 00:26:31.281 } 00:26:31.281 ] 00:26:31.281 }' 00:26:31.281 13:10:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:31.281 13:10:35 -- common/autotest_common.sh@10 -- # set +x 00:26:32.218 13:10:36 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:32.477 [2024-04-17 13:10:36.404558] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:32.477 [2024-04-17 13:10:36.404657] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:32.477 [2024-04-17 13:10:36.404749] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:32.477 [2024-04-17 13:10:36.404818] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:32.477 [2024-04-17 13:10:36.404831] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:26:32.477 13:10:36 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:32.477 13:10:36 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:26:32.734 13:10:36 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:26:32.734 13:10:36 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:26:32.734 13:10:36 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:26:32.734 13:10:36 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:26:32.734 13:10:36 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:32.992 13:10:36 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:26:32.993 13:10:36 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:26:32.993 13:10:36 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:26:33.251 13:10:37 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:26:33.251 13:10:37 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:26:33.251 13:10:37 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:26:33.251 13:10:37 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:26:33.251 13:10:37 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:33.509 [2024-04-17 13:10:37.404798] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:33.509 [2024-04-17 13:10:37.404908] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:26:33.509 [2024-04-17 13:10:37.404954] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:26:33.509 [2024-04-17 13:10:37.404981] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:33.509 [2024-04-17 13:10:37.407453] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:33.509 [2024-04-17 13:10:37.407506] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:33.509 [2024-04-17 13:10:37.407654] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:26:33.509 [2024-04-17 13:10:37.407730] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:33.509 pt2 00:26:33.509 13:10:37 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:26:33.509 13:10:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:33.509 13:10:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:33.509 13:10:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:33.509 13:10:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:33.509 13:10:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:26:33.509 13:10:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:33.509 13:10:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:33.509 13:10:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:33.509 13:10:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:33.509 13:10:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:33.509 13:10:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:33.768 13:10:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:33.768 "name": "raid_bdev1", 00:26:33.768 "uuid": "22ac2041-149b-4700-a610-f16aeaf0699f", 00:26:33.768 "strip_size_kb": 64, 00:26:33.768 "state": "configuring", 00:26:33.768 "raid_level": "raid5f", 00:26:33.768 "superblock": true, 00:26:33.768 "num_base_bdevs": 3, 00:26:33.768 "num_base_bdevs_discovered": 1, 00:26:33.768 "num_base_bdevs_operational": 2, 00:26:33.768 "base_bdevs_list": [ 00:26:33.768 { 00:26:33.768 "name": null, 00:26:33.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:33.768 "is_configured": false, 00:26:33.768 "data_offset": 2048, 00:26:33.768 "data_size": 63488 00:26:33.768 }, 00:26:33.768 { 00:26:33.768 "name": "pt2", 00:26:33.768 "uuid": "e95a44ac-f484-51a8-a007-aae7f6964385", 00:26:33.768 "is_configured": true, 00:26:33.768 "data_offset": 2048, 00:26:33.768 "data_size": 63488 00:26:33.768 }, 00:26:33.768 { 00:26:33.768 "name": null, 00:26:33.768 "uuid": "3382e88e-f4d5-5020-a9c8-1381907b1d1d", 00:26:33.768 "is_configured": false, 00:26:33.768 "data_offset": 2048, 00:26:33.768 "data_size": 63488 00:26:33.768 } 00:26:33.768 ] 00:26:33.768 }' 00:26:33.768 13:10:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:33.768 13:10:37 -- common/autotest_common.sh@10 -- # set +x 00:26:34.334 13:10:38 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:26:34.334 13:10:38 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:26:34.334 13:10:38 -- bdev/bdev_raid.sh@462 -- # i=2 00:26:34.334 13:10:38 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:34.593 [2024-04-17 13:10:38.641111] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:34.593 [2024-04-17 13:10:38.641231] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:34.593 [2024-04-17 13:10:38.641291] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:26:34.593 [2024-04-17 13:10:38.641319] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:34.593 [2024-04-17 13:10:38.641858] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:34.593 [2024-04-17 13:10:38.641894] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:34.593 [2024-04-17 13:10:38.642025] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:26:34.593 [2024-04-17 13:10:38.642058] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:34.593 [2024-04-17 13:10:38.642196] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80 00:26:34.593 [2024-04-17 13:10:38.642211] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:26:34.593 [2024-04-17 13:10:38.642305] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:26:34.593 [2024-04-17 13:10:38.647302] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80 00:26:34.593 [2024-04-17 13:10:38.647333] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80 00:26:34.593 [2024-04-17 13:10:38.647643] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:34.593 pt3 00:26:34.593 13:10:38 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:26:34.593 13:10:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:34.593 13:10:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:34.593 13:10:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:34.593 13:10:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:34.593 13:10:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:26:34.593 13:10:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:34.593 13:10:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:34.593 13:10:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:34.593 13:10:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:34.593 13:10:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:34.593 13:10:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:34.851 13:10:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:34.851 "name": "raid_bdev1", 00:26:34.851 "uuid": "22ac2041-149b-4700-a610-f16aeaf0699f", 00:26:34.851 "strip_size_kb": 64, 00:26:34.851 "state": "online", 00:26:34.851 "raid_level": "raid5f", 00:26:34.851 "superblock": true, 00:26:34.851 "num_base_bdevs": 3, 00:26:34.851 "num_base_bdevs_discovered": 2, 00:26:34.851 "num_base_bdevs_operational": 2, 00:26:34.851 "base_bdevs_list": [ 00:26:34.851 { 00:26:34.851 "name": null, 00:26:34.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:34.851 "is_configured": false, 00:26:34.851 "data_offset": 2048, 00:26:34.851 "data_size": 63488 00:26:34.851 }, 00:26:34.851 { 00:26:34.852 "name": "pt2", 00:26:34.852 "uuid": "e95a44ac-f484-51a8-a007-aae7f6964385", 
00:26:34.852 "is_configured": true, 00:26:34.852 "data_offset": 2048, 00:26:34.852 "data_size": 63488 00:26:34.852 }, 00:26:34.852 { 00:26:34.852 "name": "pt3", 00:26:34.852 "uuid": "3382e88e-f4d5-5020-a9c8-1381907b1d1d", 00:26:34.852 "is_configured": true, 00:26:34.852 "data_offset": 2048, 00:26:34.852 "data_size": 63488 00:26:34.852 } 00:26:34.852 ] 00:26:34.852 }' 00:26:34.852 13:10:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:34.852 13:10:38 -- common/autotest_common.sh@10 -- # set +x 00:26:35.787 13:10:39 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:26:35.787 13:10:39 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:35.787 [2024-04-17 13:10:39.833383] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:35.787 [2024-04-17 13:10:39.833434] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:35.787 [2024-04-17 13:10:39.833520] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:35.787 [2024-04-17 13:10:39.833593] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:35.788 [2024-04-17 13:10:39.833606] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:26:35.788 13:10:39 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:35.788 13:10:39 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:26:36.045 13:10:40 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:26:36.045 13:10:40 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:26:36.045 13:10:40 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:36.304 [2024-04-17 13:10:40.365498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:36.304 [2024-04-17 13:10:40.365612] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:36.304 [2024-04-17 13:10:40.365659] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:26:36.304 [2024-04-17 13:10:40.365683] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:36.304 [2024-04-17 13:10:40.368273] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:36.304 [2024-04-17 13:10:40.368330] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:36.304 [2024-04-17 13:10:40.368471] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:26:36.304 [2024-04-17 13:10:40.368536] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:36.304 pt1 00:26:36.304 13:10:40 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:26:36.304 13:10:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:36.304 13:10:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:36.304 13:10:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:36.304 13:10:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:36.304 13:10:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:36.304 13:10:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:36.304 13:10:40 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:36.304 13:10:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:36.304 13:10:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:36.304 13:10:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:36.304 13:10:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:36.563 13:10:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:36.563 "name": "raid_bdev1", 00:26:36.563 "uuid": "22ac2041-149b-4700-a610-f16aeaf0699f", 00:26:36.563 "strip_size_kb": 64, 00:26:36.563 "state": "configuring", 00:26:36.563 "raid_level": "raid5f", 00:26:36.563 "superblock": true, 00:26:36.563 "num_base_bdevs": 3, 00:26:36.563 "num_base_bdevs_discovered": 1, 00:26:36.563 "num_base_bdevs_operational": 3, 00:26:36.563 "base_bdevs_list": [ 00:26:36.563 { 00:26:36.563 "name": "pt1", 00:26:36.563 "uuid": "c984aed9-e9aa-5842-8b99-45d28c9c3926", 00:26:36.563 "is_configured": true, 00:26:36.563 "data_offset": 2048, 00:26:36.563 "data_size": 63488 00:26:36.563 }, 00:26:36.563 { 00:26:36.563 "name": null, 00:26:36.563 "uuid": "e95a44ac-f484-51a8-a007-aae7f6964385", 00:26:36.563 "is_configured": false, 00:26:36.563 "data_offset": 2048, 00:26:36.563 "data_size": 63488 00:26:36.563 }, 00:26:36.563 { 00:26:36.563 "name": null, 00:26:36.563 "uuid": "3382e88e-f4d5-5020-a9c8-1381907b1d1d", 00:26:36.563 "is_configured": false, 00:26:36.563 "data_offset": 2048, 00:26:36.563 "data_size": 63488 00:26:36.563 } 00:26:36.563 ] 00:26:36.563 }' 00:26:36.563 13:10:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:36.563 13:10:40 -- common/autotest_common.sh@10 -- # set +x 00:26:37.497 13:10:41 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:26:37.497 13:10:41 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:26:37.497 13:10:41 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:37.755 13:10:41 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:26:37.755 13:10:41 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:26:37.755 13:10:41 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:26:38.014 13:10:41 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:26:38.014 13:10:41 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:26:38.014 13:10:41 -- bdev/bdev_raid.sh@489 -- # i=2 00:26:38.014 13:10:41 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:38.014 [2024-04-17 13:10:42.157922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:38.014 [2024-04-17 13:10:42.158033] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:38.014 [2024-04-17 13:10:42.158076] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:26:38.014 [2024-04-17 13:10:42.158114] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:38.014 [2024-04-17 13:10:42.158647] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:38.014 [2024-04-17 13:10:42.158687] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:38.014 [2024-04-17 13:10:42.158810] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on 
bdev pt3 00:26:38.014 [2024-04-17 13:10:42.158824] bdev_raid.c:3395:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:26:38.014 [2024-04-17 13:10:42.158832] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:38.014 [2024-04-17 13:10:42.158853] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring 00:26:38.014 [2024-04-17 13:10:42.158936] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:38.273 pt3 00:26:38.273 13:10:42 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:26:38.273 13:10:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:38.273 13:10:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:38.273 13:10:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:38.273 13:10:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:38.273 13:10:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:26:38.273 13:10:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:38.273 13:10:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:38.273 13:10:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:38.273 13:10:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:38.273 13:10:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:38.273 13:10:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:38.531 13:10:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:38.531 "name": "raid_bdev1", 00:26:38.531 "uuid": "22ac2041-149b-4700-a610-f16aeaf0699f", 00:26:38.531 "strip_size_kb": 64, 00:26:38.531 "state": "configuring", 00:26:38.531 "raid_level": "raid5f", 00:26:38.531 "superblock": true, 00:26:38.531 "num_base_bdevs": 3, 00:26:38.531 "num_base_bdevs_discovered": 1, 00:26:38.531 "num_base_bdevs_operational": 2, 00:26:38.531 "base_bdevs_list": [ 00:26:38.531 { 00:26:38.531 "name": null, 00:26:38.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:38.531 "is_configured": false, 00:26:38.531 "data_offset": 2048, 00:26:38.531 "data_size": 63488 00:26:38.531 }, 00:26:38.531 { 00:26:38.531 "name": null, 00:26:38.531 "uuid": "e95a44ac-f484-51a8-a007-aae7f6964385", 00:26:38.531 "is_configured": false, 00:26:38.531 "data_offset": 2048, 00:26:38.531 "data_size": 63488 00:26:38.531 }, 00:26:38.531 { 00:26:38.531 "name": "pt3", 00:26:38.531 "uuid": "3382e88e-f4d5-5020-a9c8-1381907b1d1d", 00:26:38.531 "is_configured": true, 00:26:38.531 "data_offset": 2048, 00:26:38.531 "data_size": 63488 00:26:38.531 } 00:26:38.531 ] 00:26:38.531 }' 00:26:38.531 13:10:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:38.531 13:10:42 -- common/autotest_common.sh@10 -- # set +x 00:26:39.116 13:10:43 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:26:39.116 13:10:43 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:26:39.116 13:10:43 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:39.374 [2024-04-17 13:10:43.402213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:39.374 [2024-04-17 13:10:43.402324] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:39.374 [2024-04-17 
13:10:43.402363] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:26:39.374 [2024-04-17 13:10:43.402393] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:39.374 [2024-04-17 13:10:43.402916] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:39.374 [2024-04-17 13:10:43.402966] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:39.374 [2024-04-17 13:10:43.403073] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:26:39.374 [2024-04-17 13:10:43.403125] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:39.374 [2024-04-17 13:10:43.403262] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:26:39.374 [2024-04-17 13:10:43.403286] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:26:39.374 [2024-04-17 13:10:43.403391] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:26:39.374 [2024-04-17 13:10:43.408354] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:26:39.374 [2024-04-17 13:10:43.408384] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:26:39.374 [2024-04-17 13:10:43.408626] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:39.374 pt2 00:26:39.374 13:10:43 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:26:39.374 13:10:43 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:26:39.374 13:10:43 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:26:39.374 13:10:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:39.374 13:10:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:39.374 13:10:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:39.374 13:10:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:39.374 13:10:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:26:39.374 13:10:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:39.374 13:10:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:39.374 13:10:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:39.374 13:10:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:39.374 13:10:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:39.374 13:10:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:39.632 13:10:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:39.632 "name": "raid_bdev1", 00:26:39.632 "uuid": "22ac2041-149b-4700-a610-f16aeaf0699f", 00:26:39.632 "strip_size_kb": 64, 00:26:39.632 "state": "online", 00:26:39.632 "raid_level": "raid5f", 00:26:39.632 "superblock": true, 00:26:39.632 "num_base_bdevs": 3, 00:26:39.632 "num_base_bdevs_discovered": 2, 00:26:39.632 "num_base_bdevs_operational": 2, 00:26:39.632 "base_bdevs_list": [ 00:26:39.632 { 00:26:39.632 "name": null, 00:26:39.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:39.632 "is_configured": false, 00:26:39.632 "data_offset": 2048, 00:26:39.632 "data_size": 63488 00:26:39.632 }, 00:26:39.632 { 00:26:39.632 "name": "pt2", 00:26:39.632 "uuid": "e95a44ac-f484-51a8-a007-aae7f6964385", 00:26:39.632 "is_configured": true, 00:26:39.632 "data_offset": 2048, 
00:26:39.632 "data_size": 63488 00:26:39.632 }, 00:26:39.632 { 00:26:39.632 "name": "pt3", 00:26:39.632 "uuid": "3382e88e-f4d5-5020-a9c8-1381907b1d1d", 00:26:39.632 "is_configured": true, 00:26:39.632 "data_offset": 2048, 00:26:39.632 "data_size": 63488 00:26:39.632 } 00:26:39.632 ] 00:26:39.632 }' 00:26:39.632 13:10:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:39.632 13:10:43 -- common/autotest_common.sh@10 -- # set +x 00:26:40.566 13:10:44 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:40.566 13:10:44 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:26:40.566 [2024-04-17 13:10:44.610647] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:40.566 13:10:44 -- bdev/bdev_raid.sh@506 -- # '[' 22ac2041-149b-4700-a610-f16aeaf0699f '!=' 22ac2041-149b-4700-a610-f16aeaf0699f ']' 00:26:40.566 13:10:44 -- bdev/bdev_raid.sh@511 -- # killprocess 136483 00:26:40.566 13:10:44 -- common/autotest_common.sh@924 -- # '[' -z 136483 ']' 00:26:40.566 13:10:44 -- common/autotest_common.sh@928 -- # kill -0 136483 00:26:40.566 13:10:44 -- common/autotest_common.sh@929 -- # uname 00:26:40.566 13:10:44 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:26:40.566 13:10:44 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 136483 00:26:40.566 13:10:44 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:26:40.566 killing process with pid 136483 00:26:40.566 13:10:44 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:26:40.566 13:10:44 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 136483' 00:26:40.566 13:10:44 -- common/autotest_common.sh@943 -- # kill 136483 00:26:40.566 13:10:44 -- common/autotest_common.sh@948 -- # wait 136483 00:26:40.566 [2024-04-17 13:10:44.645030] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:40.566 [2024-04-17 13:10:44.645119] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:40.566 [2024-04-17 13:10:44.645186] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:40.566 [2024-04-17 13:10:44.645198] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:26:40.825 [2024-04-17 13:10:44.904206] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:42.200 13:10:46 -- bdev/bdev_raid.sh@513 -- # return 0 00:26:42.200 00:26:42.200 real 0m21.897s 00:26:42.200 user 0m40.618s 00:26:42.200 sys 0m2.294s 00:26:42.200 13:10:46 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:26:42.200 ************************************ 00:26:42.200 END TEST raid5f_superblock_test 00:26:42.200 ************************************ 00:26:42.200 13:10:46 -- common/autotest_common.sh@10 -- # set +x 00:26:42.200 13:10:46 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:26:42.200 13:10:46 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false 00:26:42.200 13:10:46 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:26:42.200 13:10:46 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:26:42.200 13:10:46 -- common/autotest_common.sh@10 -- # set +x 00:26:42.200 ************************************ 00:26:42.200 START TEST raid5f_rebuild_test 00:26:42.200 ************************************ 00:26:42.200 13:10:46 -- common/autotest_common.sh@1099 -- # raid_rebuild_test raid5f 3 
false false 00:26:42.200 13:10:46 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:26:42.200 13:10:46 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:26:42.200 13:10:46 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:26:42.200 13:10:46 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:26:42.200 13:10:46 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:26:42.200 13:10:46 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:26:42.200 13:10:46 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:42.200 13:10:46 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:26:42.200 13:10:46 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:42.200 13:10:46 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:42.200 13:10:46 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:26:42.200 13:10:46 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:42.200 13:10:46 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:42.200 13:10:46 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:26:42.200 13:10:46 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:42.200 13:10:46 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:42.200 13:10:46 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:26:42.200 13:10:46 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:26:42.200 13:10:46 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:26:42.200 13:10:46 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:26:42.200 13:10:46 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:26:42.200 13:10:46 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:26:42.200 13:10:46 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:26:42.200 13:10:46 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:26:42.200 13:10:46 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:26:42.200 13:10:46 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:26:42.200 13:10:46 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:26:42.200 13:10:46 -- bdev/bdev_raid.sh@544 -- # raid_pid=137146 00:26:42.200 13:10:46 -- bdev/bdev_raid.sh@545 -- # waitforlisten 137146 /var/tmp/spdk-raid.sock 00:26:42.200 13:10:46 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:42.200 13:10:46 -- common/autotest_common.sh@817 -- # '[' -z 137146 ']' 00:26:42.200 13:10:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:42.200 13:10:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:42.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:42.200 13:10:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:42.200 13:10:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:42.200 13:10:46 -- common/autotest_common.sh@10 -- # set +x 00:26:42.200 [2024-04-17 13:10:46.212778] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:26:42.200 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:42.200 Zero copy mechanism will not be used. 
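(For reference: the RPC sequence that bdev_raid.sh drives against this bdevperf target can be reproduced by hand once the target is listening on /var/tmp/spdk-raid.sock. A minimal sketch, not part of the captured log, using only RPC calls that appear in this log — bdev_malloc_create, bdev_raid_create, bdev_raid_get_bdevs, bdev_raid_remove_base_bdev — and the same rpc.py path the test uses:
    # create three 32 MiB malloc base bdevs with a 512-byte block size
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
    # assemble them into a raid5f bdev with a 64 KiB strip (-z 64), no superblock variant
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1
    # inspect the assembled state, as verify_raid_bdev_state does via jq
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
Removing one base bdev afterwards (bdev_raid_remove_base_bdev BaseBdev1, as the test does further below) degrades the array to the online state with num_base_bdevs_discovered 2 of 3 that the subsequent verify_raid_bdev_state checks assert.)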
00:26:42.200 [2024-04-17 13:10:46.213052] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137146 ] 00:26:42.459 [2024-04-17 13:10:46.380610] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:42.717 [2024-04-17 13:10:46.627511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:42.717 [2024-04-17 13:10:46.844334] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:43.299 13:10:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:43.299 13:10:47 -- common/autotest_common.sh@850 -- # return 0 00:26:43.299 13:10:47 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:26:43.299 13:10:47 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:26:43.299 13:10:47 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:43.558 BaseBdev1 00:26:43.558 13:10:47 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:26:43.558 13:10:47 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:26:43.558 13:10:47 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:43.817 BaseBdev2 00:26:43.817 13:10:47 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:26:43.817 13:10:47 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:26:43.817 13:10:47 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:44.076 BaseBdev3 00:26:44.076 13:10:48 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:26:44.336 spare_malloc 00:26:44.594 13:10:48 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:44.852 spare_delay 00:26:44.852 13:10:48 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:45.112 [2024-04-17 13:10:48.998773] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:45.112 [2024-04-17 13:10:48.998910] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:45.112 [2024-04-17 13:10:48.998948] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:26:45.112 [2024-04-17 13:10:48.998995] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:45.112 [2024-04-17 13:10:49.001722] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:45.112 [2024-04-17 13:10:49.001791] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:45.112 spare 00:26:45.112 13:10:49 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:26:45.112 [2024-04-17 13:10:49.242979] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:45.112 [2024-04-17 13:10:49.245904] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev2 is claimed 00:26:45.112 [2024-04-17 13:10:49.245967] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:45.112 [2024-04-17 13:10:49.246087] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:26:45.112 [2024-04-17 13:10:49.246104] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:26:45.112 [2024-04-17 13:10:49.246337] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:26:45.112 [2024-04-17 13:10:49.251883] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:26:45.112 [2024-04-17 13:10:49.251911] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:26:45.112 [2024-04-17 13:10:49.252144] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:45.372 13:10:49 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:45.372 13:10:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:45.372 13:10:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:45.372 13:10:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:45.372 13:10:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:45.372 13:10:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:45.372 13:10:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:45.372 13:10:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:45.372 13:10:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:45.372 13:10:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:45.372 13:10:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:45.372 13:10:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:45.372 13:10:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:45.372 "name": "raid_bdev1", 00:26:45.372 "uuid": "83fe41ec-ad8b-46b1-8b81-0a50029007f7", 00:26:45.372 "strip_size_kb": 64, 00:26:45.372 "state": "online", 00:26:45.372 "raid_level": "raid5f", 00:26:45.372 "superblock": false, 00:26:45.372 "num_base_bdevs": 3, 00:26:45.372 "num_base_bdevs_discovered": 3, 00:26:45.372 "num_base_bdevs_operational": 3, 00:26:45.372 "base_bdevs_list": [ 00:26:45.372 { 00:26:45.372 "name": "BaseBdev1", 00:26:45.372 "uuid": "fad079c7-e96a-4348-8a03-9eb4752b3d6e", 00:26:45.372 "is_configured": true, 00:26:45.372 "data_offset": 0, 00:26:45.372 "data_size": 65536 00:26:45.372 }, 00:26:45.372 { 00:26:45.372 "name": "BaseBdev2", 00:26:45.372 "uuid": "97dd4ea8-62bc-416d-a02a-a58e79810abf", 00:26:45.372 "is_configured": true, 00:26:45.372 "data_offset": 0, 00:26:45.372 "data_size": 65536 00:26:45.372 }, 00:26:45.372 { 00:26:45.372 "name": "BaseBdev3", 00:26:45.372 "uuid": "13df9b9e-6a0e-4c27-adc5-0da3f29eff3a", 00:26:45.372 "is_configured": true, 00:26:45.372 "data_offset": 0, 00:26:45.372 "data_size": 65536 00:26:45.372 } 00:26:45.372 ] 00:26:45.372 }' 00:26:45.372 13:10:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:45.372 13:10:49 -- common/autotest_common.sh@10 -- # set +x 00:26:46.309 13:10:50 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:46.309 13:10:50 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:26:46.567 [2024-04-17 13:10:50.474538] 
bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:46.567 13:10:50 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=131072 00:26:46.567 13:10:50 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:46.567 13:10:50 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:46.826 13:10:50 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:26:46.826 13:10:50 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:26:46.826 13:10:50 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:26:46.826 13:10:50 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:26:46.826 13:10:50 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:46.826 13:10:50 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:26:46.826 13:10:50 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:46.826 13:10:50 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:26:46.826 13:10:50 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:46.826 13:10:50 -- bdev/nbd_common.sh@12 -- # local i 00:26:46.826 13:10:50 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:46.826 13:10:50 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:46.826 13:10:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:26:47.085 [2024-04-17 13:10:51.006622] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:26:47.085 /dev/nbd0 00:26:47.085 13:10:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:47.085 13:10:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:47.085 13:10:51 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:26:47.085 13:10:51 -- common/autotest_common.sh@855 -- # local i 00:26:47.085 13:10:51 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:26:47.085 13:10:51 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:26:47.085 13:10:51 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:26:47.085 13:10:51 -- common/autotest_common.sh@859 -- # break 00:26:47.085 13:10:51 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:26:47.085 13:10:51 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:26:47.085 13:10:51 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:47.085 1+0 records in 00:26:47.085 1+0 records out 00:26:47.085 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251824 s, 16.3 MB/s 00:26:47.085 13:10:51 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:47.085 13:10:51 -- common/autotest_common.sh@872 -- # size=4096 00:26:47.085 13:10:51 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:47.085 13:10:51 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:26:47.085 13:10:51 -- common/autotest_common.sh@875 -- # return 0 00:26:47.085 13:10:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:47.085 13:10:51 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:47.085 13:10:51 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:26:47.085 13:10:51 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:26:47.085 13:10:51 -- bdev/bdev_raid.sh@582 -- # echo 128 00:26:47.085 13:10:51 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:26:47.652 512+0 records in 00:26:47.652 512+0 
records out 00:26:47.652 67108864 bytes (67 MB, 64 MiB) copied, 0.450916 s, 149 MB/s 00:26:47.652 13:10:51 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:26:47.652 13:10:51 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:47.652 13:10:51 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:26:47.652 13:10:51 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:47.652 13:10:51 -- bdev/nbd_common.sh@51 -- # local i 00:26:47.652 13:10:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:47.652 13:10:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:47.926 13:10:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:47.926 13:10:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:47.926 13:10:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:47.926 13:10:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:47.926 13:10:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:47.926 13:10:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:47.926 [2024-04-17 13:10:51.837280] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:47.926 13:10:51 -- bdev/nbd_common.sh@41 -- # break 00:26:47.926 13:10:51 -- bdev/nbd_common.sh@45 -- # return 0 00:26:47.926 13:10:51 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:26:48.185 [2024-04-17 13:10:52.094947] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:48.185 13:10:52 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:26:48.185 13:10:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:48.185 13:10:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:48.185 13:10:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:48.185 13:10:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:48.185 13:10:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:26:48.185 13:10:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:48.185 13:10:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:48.185 13:10:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:48.185 13:10:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:48.185 13:10:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:48.185 13:10:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:48.443 13:10:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:48.443 "name": "raid_bdev1", 00:26:48.443 "uuid": "83fe41ec-ad8b-46b1-8b81-0a50029007f7", 00:26:48.443 "strip_size_kb": 64, 00:26:48.443 "state": "online", 00:26:48.443 "raid_level": "raid5f", 00:26:48.443 "superblock": false, 00:26:48.443 "num_base_bdevs": 3, 00:26:48.443 "num_base_bdevs_discovered": 2, 00:26:48.443 "num_base_bdevs_operational": 2, 00:26:48.443 "base_bdevs_list": [ 00:26:48.443 { 00:26:48.443 "name": null, 00:26:48.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:48.443 "is_configured": false, 00:26:48.443 "data_offset": 0, 00:26:48.443 "data_size": 65536 00:26:48.443 }, 00:26:48.443 { 00:26:48.443 "name": "BaseBdev2", 00:26:48.443 "uuid": "97dd4ea8-62bc-416d-a02a-a58e79810abf", 00:26:48.443 "is_configured": true, 00:26:48.443 "data_offset": 0, 00:26:48.443 "data_size": 65536 
00:26:48.443 }, 00:26:48.443 { 00:26:48.443 "name": "BaseBdev3", 00:26:48.443 "uuid": "13df9b9e-6a0e-4c27-adc5-0da3f29eff3a", 00:26:48.443 "is_configured": true, 00:26:48.443 "data_offset": 0, 00:26:48.443 "data_size": 65536 00:26:48.443 } 00:26:48.443 ] 00:26:48.443 }' 00:26:48.443 13:10:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:48.443 13:10:52 -- common/autotest_common.sh@10 -- # set +x 00:26:49.010 13:10:53 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:49.269 [2024-04-17 13:10:53.327389] bdev_raid.c:3247:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:26:49.269 [2024-04-17 13:10:53.327444] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:49.269 [2024-04-17 13:10:53.342851] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002cfb0 00:26:49.269 [2024-04-17 13:10:53.350201] bdev_raid.c:2751:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:49.269 13:10:53 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:26:50.646 13:10:54 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:50.646 13:10:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:50.646 13:10:54 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:50.646 13:10:54 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:50.646 13:10:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:50.646 13:10:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:50.646 13:10:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:50.646 13:10:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:50.646 "name": "raid_bdev1", 00:26:50.646 "uuid": "83fe41ec-ad8b-46b1-8b81-0a50029007f7", 00:26:50.646 "strip_size_kb": 64, 00:26:50.646 "state": "online", 00:26:50.646 "raid_level": "raid5f", 00:26:50.646 "superblock": false, 00:26:50.646 "num_base_bdevs": 3, 00:26:50.646 "num_base_bdevs_discovered": 3, 00:26:50.646 "num_base_bdevs_operational": 3, 00:26:50.646 "process": { 00:26:50.646 "type": "rebuild", 00:26:50.646 "target": "spare", 00:26:50.646 "progress": { 00:26:50.646 "blocks": 24576, 00:26:50.646 "percent": 18 00:26:50.646 } 00:26:50.646 }, 00:26:50.646 "base_bdevs_list": [ 00:26:50.646 { 00:26:50.646 "name": "spare", 00:26:50.646 "uuid": "0852b9a9-e1e7-56d4-82b3-fe633ef5277f", 00:26:50.646 "is_configured": true, 00:26:50.646 "data_offset": 0, 00:26:50.646 "data_size": 65536 00:26:50.646 }, 00:26:50.646 { 00:26:50.646 "name": "BaseBdev2", 00:26:50.646 "uuid": "97dd4ea8-62bc-416d-a02a-a58e79810abf", 00:26:50.646 "is_configured": true, 00:26:50.646 "data_offset": 0, 00:26:50.646 "data_size": 65536 00:26:50.646 }, 00:26:50.646 { 00:26:50.646 "name": "BaseBdev3", 00:26:50.646 "uuid": "13df9b9e-6a0e-4c27-adc5-0da3f29eff3a", 00:26:50.646 "is_configured": true, 00:26:50.646 "data_offset": 0, 00:26:50.646 "data_size": 65536 00:26:50.646 } 00:26:50.646 ] 00:26:50.646 }' 00:26:50.646 13:10:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:50.646 13:10:54 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:50.646 13:10:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:50.646 13:10:54 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:50.646 13:10:54 -- bdev/bdev_raid.sh@604 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:50.905 [2024-04-17 13:10:54.968246] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:51.163 [2024-04-17 13:10:55.067008] bdev_raid.c:2442:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:51.163 [2024-04-17 13:10:55.067145] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:51.163 13:10:55 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:26:51.163 13:10:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:51.163 13:10:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:51.163 13:10:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:51.163 13:10:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:51.163 13:10:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:26:51.163 13:10:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:51.163 13:10:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:51.163 13:10:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:51.163 13:10:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:51.163 13:10:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:51.163 13:10:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:51.421 13:10:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:51.421 "name": "raid_bdev1", 00:26:51.421 "uuid": "83fe41ec-ad8b-46b1-8b81-0a50029007f7", 00:26:51.421 "strip_size_kb": 64, 00:26:51.421 "state": "online", 00:26:51.421 "raid_level": "raid5f", 00:26:51.421 "superblock": false, 00:26:51.421 "num_base_bdevs": 3, 00:26:51.422 "num_base_bdevs_discovered": 2, 00:26:51.422 "num_base_bdevs_operational": 2, 00:26:51.422 "base_bdevs_list": [ 00:26:51.422 { 00:26:51.422 "name": null, 00:26:51.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:51.422 "is_configured": false, 00:26:51.422 "data_offset": 0, 00:26:51.422 "data_size": 65536 00:26:51.422 }, 00:26:51.422 { 00:26:51.422 "name": "BaseBdev2", 00:26:51.422 "uuid": "97dd4ea8-62bc-416d-a02a-a58e79810abf", 00:26:51.422 "is_configured": true, 00:26:51.422 "data_offset": 0, 00:26:51.422 "data_size": 65536 00:26:51.422 }, 00:26:51.422 { 00:26:51.422 "name": "BaseBdev3", 00:26:51.422 "uuid": "13df9b9e-6a0e-4c27-adc5-0da3f29eff3a", 00:26:51.422 "is_configured": true, 00:26:51.422 "data_offset": 0, 00:26:51.422 "data_size": 65536 00:26:51.422 } 00:26:51.422 ] 00:26:51.422 }' 00:26:51.422 13:10:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:51.422 13:10:55 -- common/autotest_common.sh@10 -- # set +x 00:26:51.990 13:10:56 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:51.990 13:10:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:51.990 13:10:56 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:26:51.990 13:10:56 -- bdev/bdev_raid.sh@185 -- # local target=none 00:26:51.990 13:10:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:51.990 13:10:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:51.990 13:10:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:52.248 13:10:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:52.248 "name": 
"raid_bdev1", 00:26:52.248 "uuid": "83fe41ec-ad8b-46b1-8b81-0a50029007f7", 00:26:52.248 "strip_size_kb": 64, 00:26:52.248 "state": "online", 00:26:52.248 "raid_level": "raid5f", 00:26:52.248 "superblock": false, 00:26:52.248 "num_base_bdevs": 3, 00:26:52.248 "num_base_bdevs_discovered": 2, 00:26:52.248 "num_base_bdevs_operational": 2, 00:26:52.248 "base_bdevs_list": [ 00:26:52.248 { 00:26:52.248 "name": null, 00:26:52.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:52.248 "is_configured": false, 00:26:52.248 "data_offset": 0, 00:26:52.248 "data_size": 65536 00:26:52.248 }, 00:26:52.248 { 00:26:52.248 "name": "BaseBdev2", 00:26:52.248 "uuid": "97dd4ea8-62bc-416d-a02a-a58e79810abf", 00:26:52.248 "is_configured": true, 00:26:52.248 "data_offset": 0, 00:26:52.248 "data_size": 65536 00:26:52.248 }, 00:26:52.248 { 00:26:52.248 "name": "BaseBdev3", 00:26:52.248 "uuid": "13df9b9e-6a0e-4c27-adc5-0da3f29eff3a", 00:26:52.248 "is_configured": true, 00:26:52.248 "data_offset": 0, 00:26:52.248 "data_size": 65536 00:26:52.248 } 00:26:52.248 ] 00:26:52.248 }' 00:26:52.248 13:10:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:52.506 13:10:56 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:52.506 13:10:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:52.506 13:10:56 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:26:52.506 13:10:56 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:52.765 [2024-04-17 13:10:56.707491] bdev_raid.c:3247:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:26:52.765 [2024-04-17 13:10:56.707550] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:52.765 [2024-04-17 13:10:56.720436] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002d150 00:26:52.765 [2024-04-17 13:10:56.727103] bdev_raid.c:2751:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:52.765 13:10:56 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:26:53.701 13:10:57 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:53.701 13:10:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:53.701 13:10:57 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:53.701 13:10:57 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:53.701 13:10:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:53.701 13:10:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:53.701 13:10:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:53.959 13:10:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:53.959 "name": "raid_bdev1", 00:26:53.959 "uuid": "83fe41ec-ad8b-46b1-8b81-0a50029007f7", 00:26:53.959 "strip_size_kb": 64, 00:26:53.959 "state": "online", 00:26:53.959 "raid_level": "raid5f", 00:26:53.959 "superblock": false, 00:26:53.959 "num_base_bdevs": 3, 00:26:53.959 "num_base_bdevs_discovered": 3, 00:26:53.959 "num_base_bdevs_operational": 3, 00:26:53.959 "process": { 00:26:53.959 "type": "rebuild", 00:26:53.959 "target": "spare", 00:26:53.959 "progress": { 00:26:53.959 "blocks": 24576, 00:26:53.959 "percent": 18 00:26:53.959 } 00:26:53.959 }, 00:26:53.959 "base_bdevs_list": [ 00:26:53.959 { 00:26:53.959 "name": "spare", 00:26:53.959 "uuid": "0852b9a9-e1e7-56d4-82b3-fe633ef5277f", 
00:26:53.959 "is_configured": true, 00:26:53.959 "data_offset": 0, 00:26:53.959 "data_size": 65536 00:26:53.959 }, 00:26:53.959 { 00:26:53.959 "name": "BaseBdev2", 00:26:53.959 "uuid": "97dd4ea8-62bc-416d-a02a-a58e79810abf", 00:26:53.959 "is_configured": true, 00:26:53.959 "data_offset": 0, 00:26:53.959 "data_size": 65536 00:26:53.959 }, 00:26:53.959 { 00:26:53.959 "name": "BaseBdev3", 00:26:53.959 "uuid": "13df9b9e-6a0e-4c27-adc5-0da3f29eff3a", 00:26:53.959 "is_configured": true, 00:26:53.959 "data_offset": 0, 00:26:53.959 "data_size": 65536 00:26:53.959 } 00:26:53.959 ] 00:26:53.959 }' 00:26:53.959 13:10:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:54.218 13:10:58 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:54.218 13:10:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:54.218 13:10:58 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:54.218 13:10:58 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:26:54.218 13:10:58 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:26:54.218 13:10:58 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:26:54.218 13:10:58 -- bdev/bdev_raid.sh@657 -- # local timeout=679 00:26:54.218 13:10:58 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:54.218 13:10:58 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:54.218 13:10:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:54.218 13:10:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:54.218 13:10:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:54.218 13:10:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:54.218 13:10:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:54.218 13:10:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:54.477 13:10:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:54.477 "name": "raid_bdev1", 00:26:54.477 "uuid": "83fe41ec-ad8b-46b1-8b81-0a50029007f7", 00:26:54.477 "strip_size_kb": 64, 00:26:54.477 "state": "online", 00:26:54.477 "raid_level": "raid5f", 00:26:54.477 "superblock": false, 00:26:54.477 "num_base_bdevs": 3, 00:26:54.477 "num_base_bdevs_discovered": 3, 00:26:54.477 "num_base_bdevs_operational": 3, 00:26:54.477 "process": { 00:26:54.477 "type": "rebuild", 00:26:54.477 "target": "spare", 00:26:54.477 "progress": { 00:26:54.477 "blocks": 34816, 00:26:54.477 "percent": 26 00:26:54.477 } 00:26:54.477 }, 00:26:54.477 "base_bdevs_list": [ 00:26:54.477 { 00:26:54.477 "name": "spare", 00:26:54.477 "uuid": "0852b9a9-e1e7-56d4-82b3-fe633ef5277f", 00:26:54.477 "is_configured": true, 00:26:54.477 "data_offset": 0, 00:26:54.477 "data_size": 65536 00:26:54.477 }, 00:26:54.477 { 00:26:54.477 "name": "BaseBdev2", 00:26:54.477 "uuid": "97dd4ea8-62bc-416d-a02a-a58e79810abf", 00:26:54.477 "is_configured": true, 00:26:54.477 "data_offset": 0, 00:26:54.477 "data_size": 65536 00:26:54.477 }, 00:26:54.477 { 00:26:54.477 "name": "BaseBdev3", 00:26:54.477 "uuid": "13df9b9e-6a0e-4c27-adc5-0da3f29eff3a", 00:26:54.477 "is_configured": true, 00:26:54.477 "data_offset": 0, 00:26:54.477 "data_size": 65536 00:26:54.477 } 00:26:54.477 ] 00:26:54.477 }' 00:26:54.477 13:10:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:54.477 13:10:58 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:54.477 13:10:58 -- bdev/bdev_raid.sh@191 -- # jq -r 
'.process.target // "none"' 00:26:54.477 13:10:58 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:54.477 13:10:58 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:55.854 13:10:59 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:55.854 13:10:59 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:55.854 13:10:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:55.854 13:10:59 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:55.854 13:10:59 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:55.854 13:10:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:55.854 13:10:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:55.854 13:10:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:55.854 13:10:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:55.854 "name": "raid_bdev1", 00:26:55.854 "uuid": "83fe41ec-ad8b-46b1-8b81-0a50029007f7", 00:26:55.854 "strip_size_kb": 64, 00:26:55.854 "state": "online", 00:26:55.854 "raid_level": "raid5f", 00:26:55.854 "superblock": false, 00:26:55.854 "num_base_bdevs": 3, 00:26:55.854 "num_base_bdevs_discovered": 3, 00:26:55.854 "num_base_bdevs_operational": 3, 00:26:55.854 "process": { 00:26:55.854 "type": "rebuild", 00:26:55.854 "target": "spare", 00:26:55.854 "progress": { 00:26:55.854 "blocks": 61440, 00:26:55.854 "percent": 46 00:26:55.854 } 00:26:55.854 }, 00:26:55.854 "base_bdevs_list": [ 00:26:55.854 { 00:26:55.854 "name": "spare", 00:26:55.854 "uuid": "0852b9a9-e1e7-56d4-82b3-fe633ef5277f", 00:26:55.854 "is_configured": true, 00:26:55.854 "data_offset": 0, 00:26:55.854 "data_size": 65536 00:26:55.854 }, 00:26:55.854 { 00:26:55.854 "name": "BaseBdev2", 00:26:55.854 "uuid": "97dd4ea8-62bc-416d-a02a-a58e79810abf", 00:26:55.854 "is_configured": true, 00:26:55.854 "data_offset": 0, 00:26:55.854 "data_size": 65536 00:26:55.854 }, 00:26:55.854 { 00:26:55.854 "name": "BaseBdev3", 00:26:55.854 "uuid": "13df9b9e-6a0e-4c27-adc5-0da3f29eff3a", 00:26:55.854 "is_configured": true, 00:26:55.854 "data_offset": 0, 00:26:55.854 "data_size": 65536 00:26:55.854 } 00:26:55.854 ] 00:26:55.854 }' 00:26:55.854 13:10:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:55.854 13:10:59 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:55.854 13:10:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:55.854 13:10:59 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:55.854 13:10:59 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:57.230 13:11:00 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:57.230 13:11:00 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:57.230 13:11:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:57.230 13:11:00 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:57.230 13:11:00 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:57.230 13:11:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:57.230 13:11:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:57.230 13:11:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:57.230 13:11:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:57.230 "name": "raid_bdev1", 00:26:57.230 "uuid": "83fe41ec-ad8b-46b1-8b81-0a50029007f7", 
00:26:57.230 "strip_size_kb": 64, 00:26:57.230 "state": "online", 00:26:57.230 "raid_level": "raid5f", 00:26:57.230 "superblock": false, 00:26:57.230 "num_base_bdevs": 3, 00:26:57.230 "num_base_bdevs_discovered": 3, 00:26:57.230 "num_base_bdevs_operational": 3, 00:26:57.230 "process": { 00:26:57.230 "type": "rebuild", 00:26:57.230 "target": "spare", 00:26:57.230 "progress": { 00:26:57.230 "blocks": 90112, 00:26:57.230 "percent": 68 00:26:57.230 } 00:26:57.230 }, 00:26:57.230 "base_bdevs_list": [ 00:26:57.230 { 00:26:57.230 "name": "spare", 00:26:57.230 "uuid": "0852b9a9-e1e7-56d4-82b3-fe633ef5277f", 00:26:57.230 "is_configured": true, 00:26:57.230 "data_offset": 0, 00:26:57.230 "data_size": 65536 00:26:57.230 }, 00:26:57.230 { 00:26:57.230 "name": "BaseBdev2", 00:26:57.230 "uuid": "97dd4ea8-62bc-416d-a02a-a58e79810abf", 00:26:57.230 "is_configured": true, 00:26:57.230 "data_offset": 0, 00:26:57.230 "data_size": 65536 00:26:57.230 }, 00:26:57.230 { 00:26:57.230 "name": "BaseBdev3", 00:26:57.230 "uuid": "13df9b9e-6a0e-4c27-adc5-0da3f29eff3a", 00:26:57.230 "is_configured": true, 00:26:57.230 "data_offset": 0, 00:26:57.230 "data_size": 65536 00:26:57.230 } 00:26:57.230 ] 00:26:57.230 }' 00:26:57.230 13:11:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:57.230 13:11:01 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:57.230 13:11:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:57.488 13:11:01 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:57.488 13:11:01 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:58.422 13:11:02 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:58.422 13:11:02 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:58.422 13:11:02 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:58.422 13:11:02 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:58.422 13:11:02 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:58.422 13:11:02 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:58.422 13:11:02 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:58.422 13:11:02 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:58.680 13:11:02 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:58.680 "name": "raid_bdev1", 00:26:58.680 "uuid": "83fe41ec-ad8b-46b1-8b81-0a50029007f7", 00:26:58.680 "strip_size_kb": 64, 00:26:58.680 "state": "online", 00:26:58.680 "raid_level": "raid5f", 00:26:58.680 "superblock": false, 00:26:58.680 "num_base_bdevs": 3, 00:26:58.680 "num_base_bdevs_discovered": 3, 00:26:58.680 "num_base_bdevs_operational": 3, 00:26:58.680 "process": { 00:26:58.680 "type": "rebuild", 00:26:58.680 "target": "spare", 00:26:58.680 "progress": { 00:26:58.680 "blocks": 118784, 00:26:58.680 "percent": 90 00:26:58.680 } 00:26:58.680 }, 00:26:58.680 "base_bdevs_list": [ 00:26:58.680 { 00:26:58.680 "name": "spare", 00:26:58.680 "uuid": "0852b9a9-e1e7-56d4-82b3-fe633ef5277f", 00:26:58.680 "is_configured": true, 00:26:58.680 "data_offset": 0, 00:26:58.680 "data_size": 65536 00:26:58.680 }, 00:26:58.680 { 00:26:58.680 "name": "BaseBdev2", 00:26:58.680 "uuid": "97dd4ea8-62bc-416d-a02a-a58e79810abf", 00:26:58.680 "is_configured": true, 00:26:58.680 "data_offset": 0, 00:26:58.680 "data_size": 65536 00:26:58.680 }, 00:26:58.680 { 00:26:58.680 "name": "BaseBdev3", 00:26:58.680 "uuid": "13df9b9e-6a0e-4c27-adc5-0da3f29eff3a", 
00:26:58.680 "is_configured": true, 00:26:58.680 "data_offset": 0, 00:26:58.680 "data_size": 65536 00:26:58.680 } 00:26:58.680 ] 00:26:58.680 }' 00:26:58.680 13:11:02 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:58.680 13:11:02 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:58.680 13:11:02 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:58.680 13:11:02 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:58.680 13:11:02 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:59.247 [2024-04-17 13:11:03.189873] bdev_raid.c:2716:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:59.247 [2024-04-17 13:11:03.189975] bdev_raid.c:2433:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:59.247 [2024-04-17 13:11:03.190062] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:59.813 13:11:03 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:59.813 13:11:03 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:59.813 13:11:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:59.813 13:11:03 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:59.813 13:11:03 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:59.813 13:11:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:59.813 13:11:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:59.813 13:11:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:00.072 13:11:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:00.072 "name": "raid_bdev1", 00:27:00.072 "uuid": "83fe41ec-ad8b-46b1-8b81-0a50029007f7", 00:27:00.072 "strip_size_kb": 64, 00:27:00.072 "state": "online", 00:27:00.072 "raid_level": "raid5f", 00:27:00.072 "superblock": false, 00:27:00.072 "num_base_bdevs": 3, 00:27:00.072 "num_base_bdevs_discovered": 3, 00:27:00.072 "num_base_bdevs_operational": 3, 00:27:00.072 "base_bdevs_list": [ 00:27:00.072 { 00:27:00.072 "name": "spare", 00:27:00.072 "uuid": "0852b9a9-e1e7-56d4-82b3-fe633ef5277f", 00:27:00.072 "is_configured": true, 00:27:00.072 "data_offset": 0, 00:27:00.072 "data_size": 65536 00:27:00.072 }, 00:27:00.072 { 00:27:00.072 "name": "BaseBdev2", 00:27:00.072 "uuid": "97dd4ea8-62bc-416d-a02a-a58e79810abf", 00:27:00.072 "is_configured": true, 00:27:00.072 "data_offset": 0, 00:27:00.072 "data_size": 65536 00:27:00.072 }, 00:27:00.072 { 00:27:00.072 "name": "BaseBdev3", 00:27:00.072 "uuid": "13df9b9e-6a0e-4c27-adc5-0da3f29eff3a", 00:27:00.072 "is_configured": true, 00:27:00.072 "data_offset": 0, 00:27:00.072 "data_size": 65536 00:27:00.072 } 00:27:00.072 ] 00:27:00.072 }' 00:27:00.072 13:11:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:00.072 13:11:04 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:00.072 13:11:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:00.072 13:11:04 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:27:00.072 13:11:04 -- bdev/bdev_raid.sh@660 -- # break 00:27:00.072 13:11:04 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:00.072 13:11:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:00.072 13:11:04 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:27:00.072 13:11:04 -- bdev/bdev_raid.sh@185 -- # local target=none 00:27:00.072 13:11:04 -- 
bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:00.072 13:11:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:00.072 13:11:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:00.329 13:11:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:00.329 "name": "raid_bdev1", 00:27:00.329 "uuid": "83fe41ec-ad8b-46b1-8b81-0a50029007f7", 00:27:00.329 "strip_size_kb": 64, 00:27:00.329 "state": "online", 00:27:00.329 "raid_level": "raid5f", 00:27:00.329 "superblock": false, 00:27:00.329 "num_base_bdevs": 3, 00:27:00.329 "num_base_bdevs_discovered": 3, 00:27:00.329 "num_base_bdevs_operational": 3, 00:27:00.329 "base_bdevs_list": [ 00:27:00.329 { 00:27:00.329 "name": "spare", 00:27:00.329 "uuid": "0852b9a9-e1e7-56d4-82b3-fe633ef5277f", 00:27:00.329 "is_configured": true, 00:27:00.329 "data_offset": 0, 00:27:00.329 "data_size": 65536 00:27:00.329 }, 00:27:00.329 { 00:27:00.329 "name": "BaseBdev2", 00:27:00.329 "uuid": "97dd4ea8-62bc-416d-a02a-a58e79810abf", 00:27:00.329 "is_configured": true, 00:27:00.329 "data_offset": 0, 00:27:00.329 "data_size": 65536 00:27:00.329 }, 00:27:00.329 { 00:27:00.329 "name": "BaseBdev3", 00:27:00.329 "uuid": "13df9b9e-6a0e-4c27-adc5-0da3f29eff3a", 00:27:00.329 "is_configured": true, 00:27:00.329 "data_offset": 0, 00:27:00.329 "data_size": 65536 00:27:00.329 } 00:27:00.329 ] 00:27:00.329 }' 00:27:00.329 13:11:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:00.329 13:11:04 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:00.329 13:11:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:00.590 13:11:04 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:27:00.590 13:11:04 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:27:00.590 13:11:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:00.590 13:11:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:00.590 13:11:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:00.590 13:11:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:00.590 13:11:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:00.590 13:11:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:00.590 13:11:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:00.590 13:11:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:00.590 13:11:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:00.590 13:11:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:00.590 13:11:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:00.847 13:11:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:00.847 "name": "raid_bdev1", 00:27:00.847 "uuid": "83fe41ec-ad8b-46b1-8b81-0a50029007f7", 00:27:00.847 "strip_size_kb": 64, 00:27:00.847 "state": "online", 00:27:00.847 "raid_level": "raid5f", 00:27:00.847 "superblock": false, 00:27:00.847 "num_base_bdevs": 3, 00:27:00.847 "num_base_bdevs_discovered": 3, 00:27:00.847 "num_base_bdevs_operational": 3, 00:27:00.847 "base_bdevs_list": [ 00:27:00.847 { 00:27:00.847 "name": "spare", 00:27:00.847 "uuid": "0852b9a9-e1e7-56d4-82b3-fe633ef5277f", 00:27:00.847 "is_configured": true, 00:27:00.847 "data_offset": 0, 00:27:00.847 "data_size": 65536 00:27:00.847 }, 00:27:00.847 { 00:27:00.847 "name": "BaseBdev2", 
00:27:00.847 "uuid": "97dd4ea8-62bc-416d-a02a-a58e79810abf", 00:27:00.847 "is_configured": true, 00:27:00.847 "data_offset": 0, 00:27:00.847 "data_size": 65536 00:27:00.847 }, 00:27:00.847 { 00:27:00.847 "name": "BaseBdev3", 00:27:00.847 "uuid": "13df9b9e-6a0e-4c27-adc5-0da3f29eff3a", 00:27:00.847 "is_configured": true, 00:27:00.847 "data_offset": 0, 00:27:00.847 "data_size": 65536 00:27:00.847 } 00:27:00.847 ] 00:27:00.847 }' 00:27:00.847 13:11:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:00.847 13:11:04 -- common/autotest_common.sh@10 -- # set +x 00:27:01.413 13:11:05 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:01.671 [2024-04-17 13:11:05.746576] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:01.671 [2024-04-17 13:11:05.746621] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:01.671 [2024-04-17 13:11:05.746730] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:01.671 [2024-04-17 13:11:05.746807] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:01.671 [2024-04-17 13:11:05.746819] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:27:01.671 13:11:05 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:01.671 13:11:05 -- bdev/bdev_raid.sh@671 -- # jq length 00:27:01.929 13:11:05 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:27:01.929 13:11:05 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:27:01.929 13:11:05 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:27:01.929 13:11:05 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:01.929 13:11:05 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:27:01.929 13:11:05 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:01.929 13:11:05 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:27:01.929 13:11:05 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:01.929 13:11:05 -- bdev/nbd_common.sh@12 -- # local i 00:27:01.929 13:11:05 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:01.929 13:11:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:01.929 13:11:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:27:02.187 /dev/nbd0 00:27:02.187 13:11:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:02.187 13:11:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:02.187 13:11:06 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:27:02.187 13:11:06 -- common/autotest_common.sh@855 -- # local i 00:27:02.187 13:11:06 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:02.187 13:11:06 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:02.187 13:11:06 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:27:02.187 13:11:06 -- common/autotest_common.sh@859 -- # break 00:27:02.187 13:11:06 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:02.187 13:11:06 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:02.187 13:11:06 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:02.187 1+0 records in 00:27:02.187 1+0 records out 
00:27:02.187 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239634 s, 17.1 MB/s 00:27:02.187 13:11:06 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:02.187 13:11:06 -- common/autotest_common.sh@872 -- # size=4096 00:27:02.187 13:11:06 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:02.187 13:11:06 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:02.187 13:11:06 -- common/autotest_common.sh@875 -- # return 0 00:27:02.187 13:11:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:02.187 13:11:06 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:02.187 13:11:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:27:02.444 /dev/nbd1 00:27:02.444 13:11:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:02.444 13:11:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:02.444 13:11:06 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:27:02.444 13:11:06 -- common/autotest_common.sh@855 -- # local i 00:27:02.444 13:11:06 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:02.444 13:11:06 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:02.444 13:11:06 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:27:02.444 13:11:06 -- common/autotest_common.sh@859 -- # break 00:27:02.444 13:11:06 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:02.444 13:11:06 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:02.444 13:11:06 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:02.444 1+0 records in 00:27:02.444 1+0 records out 00:27:02.444 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00049026 s, 8.4 MB/s 00:27:02.444 13:11:06 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:02.444 13:11:06 -- common/autotest_common.sh@872 -- # size=4096 00:27:02.444 13:11:06 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:02.444 13:11:06 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:02.444 13:11:06 -- common/autotest_common.sh@875 -- # return 0 00:27:02.445 13:11:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:02.445 13:11:06 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:02.445 13:11:06 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:27:02.702 13:11:06 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:27:02.703 13:11:06 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:02.703 13:11:06 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:27:02.703 13:11:06 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:02.703 13:11:06 -- bdev/nbd_common.sh@51 -- # local i 00:27:02.703 13:11:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:02.703 13:11:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:02.961 13:11:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:02.961 13:11:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:02.961 13:11:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:02.961 13:11:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:02.961 13:11:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:02.961 13:11:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:27:02.961 13:11:07 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:27:03.218 13:11:07 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:27:03.218 13:11:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:03.218 13:11:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:03.218 13:11:07 -- bdev/nbd_common.sh@41 -- # break 00:27:03.218 13:11:07 -- bdev/nbd_common.sh@45 -- # return 0 00:27:03.218 13:11:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:03.218 13:11:07 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:27:03.476 13:11:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:03.476 13:11:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:03.476 13:11:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:03.476 13:11:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:03.476 13:11:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:03.476 13:11:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:03.476 13:11:07 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:27:03.476 13:11:07 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:27:03.476 13:11:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:03.476 13:11:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:03.476 13:11:07 -- bdev/nbd_common.sh@41 -- # break 00:27:03.476 13:11:07 -- bdev/nbd_common.sh@45 -- # return 0 00:27:03.476 13:11:07 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:27:03.476 13:11:07 -- bdev/bdev_raid.sh@709 -- # killprocess 137146 00:27:03.476 13:11:07 -- common/autotest_common.sh@924 -- # '[' -z 137146 ']' 00:27:03.476 13:11:07 -- common/autotest_common.sh@928 -- # kill -0 137146 00:27:03.476 13:11:07 -- common/autotest_common.sh@929 -- # uname 00:27:03.477 13:11:07 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:27:03.477 13:11:07 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 137146 00:27:03.477 13:11:07 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:27:03.477 13:11:07 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:27:03.477 13:11:07 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 137146' 00:27:03.477 killing process with pid 137146 00:27:03.477 13:11:07 -- common/autotest_common.sh@943 -- # kill 137146 00:27:03.477 Received shutdown signal, test time was about 60.000000 seconds 00:27:03.477 00:27:03.477 Latency(us) 00:27:03.477 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:03.477 =================================================================================================================== 00:27:03.477 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:03.477 13:11:07 -- common/autotest_common.sh@948 -- # wait 137146 00:27:03.477 [2024-04-17 13:11:07.534720] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:03.735 [2024-04-17 13:11:07.865504] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:05.111 ************************************ 00:27:05.111 END TEST raid5f_rebuild_test 00:27:05.111 ************************************ 00:27:05.111 13:11:08 -- bdev/bdev_raid.sh@711 -- # return 0 00:27:05.111 00:27:05.111 real 0m22.837s 00:27:05.111 user 0m34.815s 00:27:05.111 sys 0m2.736s 00:27:05.111 13:11:08 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:27:05.111 13:11:08 -- common/autotest_common.sh@10 -- # set +x 00:27:05.111 13:11:09 -- bdev/bdev_raid.sh@749 
-- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false 00:27:05.111 13:11:09 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:27:05.111 13:11:09 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:27:05.111 13:11:09 -- common/autotest_common.sh@10 -- # set +x 00:27:05.111 ************************************ 00:27:05.111 START TEST raid5f_rebuild_test_sb 00:27:05.111 ************************************ 00:27:05.111 13:11:09 -- common/autotest_common.sh@1099 -- # raid_rebuild_test raid5f 3 true false 00:27:05.111 13:11:09 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:27:05.111 13:11:09 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:27:05.111 13:11:09 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:27:05.111 13:11:09 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:27:05.111 13:11:09 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:27:05.111 13:11:09 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:27:05.111 13:11:09 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:27:05.111 13:11:09 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:27:05.111 13:11:09 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:27:05.111 13:11:09 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:27:05.111 13:11:09 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:27:05.111 13:11:09 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:27:05.111 13:11:09 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:27:05.111 13:11:09 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:27:05.111 13:11:09 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:27:05.111 13:11:09 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:27:05.111 13:11:09 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:27:05.111 13:11:09 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:27:05.111 13:11:09 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:27:05.111 13:11:09 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:27:05.111 13:11:09 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:27:05.111 13:11:09 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:27:05.111 13:11:09 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:27:05.111 13:11:09 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:27:05.111 13:11:09 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:27:05.111 13:11:09 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:27:05.111 13:11:09 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:27:05.111 13:11:09 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:27:05.111 13:11:09 -- bdev/bdev_raid.sh@544 -- # raid_pid=137762 00:27:05.111 13:11:09 -- bdev/bdev_raid.sh@545 -- # waitforlisten 137762 /var/tmp/spdk-raid.sock 00:27:05.111 13:11:09 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:05.111 13:11:09 -- common/autotest_common.sh@817 -- # '[' -z 137762 ']' 00:27:05.111 13:11:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:05.111 13:11:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:05.111 13:11:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:05.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
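# Note on the step above: the harness now blocks until the freshly
# spawned bdevperf process (raid_pid=137762) answers on its RPC socket.
# A minimal illustrative equivalent of that wait, a sketch only and not
# the real waitforlisten helper from autotest_common.sh, would be:
#
#   until /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
#       -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
#     sleep 0.1   # retry until the app's RPC server accepts a call
#   done
#
# rpc_get_methods is a standard SPDK RPC, so the loop exits as soon as
# the socket is serviced; bdevperf was started with -z, which defers the
# benchmark I/O until it is driven over this same socket.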
00:27:05.111 13:11:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:05.111 13:11:09 -- common/autotest_common.sh@10 -- # set +x 00:27:05.111 [2024-04-17 13:11:09.103950] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:27:05.111 [2024-04-17 13:11:09.104329] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137762 ] 00:27:05.111 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:05.111 Zero copy mechanism will not be used. 00:27:05.370 [2024-04-17 13:11:09.257744] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.370 [2024-04-17 13:11:09.470017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.628 [2024-04-17 13:11:09.667889] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:06.193 13:11:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:06.193 13:11:10 -- common/autotest_common.sh@850 -- # return 0 00:27:06.193 13:11:10 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:27:06.193 13:11:10 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:27:06.193 13:11:10 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:06.451 BaseBdev1_malloc 00:27:06.451 13:11:10 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:06.708 [2024-04-17 13:11:10.643025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:06.708 [2024-04-17 13:11:10.643145] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:06.708 [2024-04-17 13:11:10.643186] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:27:06.708 [2024-04-17 13:11:10.643234] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:06.708 [2024-04-17 13:11:10.645834] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:06.708 [2024-04-17 13:11:10.645891] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:06.708 BaseBdev1 00:27:06.708 13:11:10 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:27:06.708 13:11:10 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:27:06.708 13:11:10 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:06.965 BaseBdev2_malloc 00:27:06.965 13:11:10 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:07.223 [2024-04-17 13:11:11.198384] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:07.223 [2024-04-17 13:11:11.198494] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:07.223 [2024-04-17 13:11:11.198545] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:27:07.223 [2024-04-17 13:11:11.198600] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:07.223 [2024-04-17 13:11:11.201147] vbdev_passthru.c: 704:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:27:07.223 [2024-04-17 13:11:11.201206] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:07.223 BaseBdev2 00:27:07.223 13:11:11 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:27:07.223 13:11:11 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:27:07.223 13:11:11 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:27:07.481 BaseBdev3_malloc 00:27:07.481 13:11:11 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:27:07.741 [2024-04-17 13:11:11.693437] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:27:07.741 [2024-04-17 13:11:11.693541] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:07.741 [2024-04-17 13:11:11.693588] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:27:07.741 [2024-04-17 13:11:11.693634] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:07.741 [2024-04-17 13:11:11.696174] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:07.741 [2024-04-17 13:11:11.696238] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:07.741 BaseBdev3 00:27:07.741 13:11:11 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:27:08.006 spare_malloc 00:27:08.006 13:11:11 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:08.265 spare_delay 00:27:08.265 13:11:12 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:08.524 [2024-04-17 13:11:12.484569] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:08.524 [2024-04-17 13:11:12.484684] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:08.524 [2024-04-17 13:11:12.484726] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:27:08.524 [2024-04-17 13:11:12.484779] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:08.524 [2024-04-17 13:11:12.487350] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:08.524 [2024-04-17 13:11:12.487412] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:08.524 spare 00:27:08.524 13:11:12 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:27:08.782 [2024-04-17 13:11:12.716730] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:08.782 [2024-04-17 13:11:12.718901] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:08.782 [2024-04-17 13:11:12.718993] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:08.782 [2024-04-17 13:11:12.719233] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:27:08.782 [2024-04-17 
13:11:12.719260] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:27:08.782 [2024-04-17 13:11:12.719399] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:27:08.782 [2024-04-17 13:11:12.724638] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:27:08.782 [2024-04-17 13:11:12.724671] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:27:08.782 [2024-04-17 13:11:12.724888] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:08.782 13:11:12 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:27:08.782 13:11:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:08.782 13:11:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:08.782 13:11:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:08.782 13:11:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:08.782 13:11:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:08.782 13:11:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:08.782 13:11:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:08.782 13:11:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:08.782 13:11:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:08.782 13:11:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:08.782 13:11:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:09.042 13:11:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:09.042 "name": "raid_bdev1", 00:27:09.042 "uuid": "3d9a3033-d515-4ccb-8c59-cf9a6c88eaee", 00:27:09.042 "strip_size_kb": 64, 00:27:09.042 "state": "online", 00:27:09.042 "raid_level": "raid5f", 00:27:09.042 "superblock": true, 00:27:09.042 "num_base_bdevs": 3, 00:27:09.042 "num_base_bdevs_discovered": 3, 00:27:09.042 "num_base_bdevs_operational": 3, 00:27:09.042 "base_bdevs_list": [ 00:27:09.042 { 00:27:09.042 "name": "BaseBdev1", 00:27:09.042 "uuid": "eadac65e-ee5f-557c-b266-7093e65f5713", 00:27:09.042 "is_configured": true, 00:27:09.042 "data_offset": 2048, 00:27:09.042 "data_size": 63488 00:27:09.042 }, 00:27:09.042 { 00:27:09.042 "name": "BaseBdev2", 00:27:09.042 "uuid": "39335f4e-8a18-5b0d-a459-75fa1ddb444d", 00:27:09.042 "is_configured": true, 00:27:09.042 "data_offset": 2048, 00:27:09.042 "data_size": 63488 00:27:09.042 }, 00:27:09.042 { 00:27:09.042 "name": "BaseBdev3", 00:27:09.042 "uuid": "3e2f7242-2ab9-5eaf-8e9a-40f1a7f9c219", 00:27:09.042 "is_configured": true, 00:27:09.042 "data_offset": 2048, 00:27:09.042 "data_size": 63488 00:27:09.042 } 00:27:09.042 ] 00:27:09.042 }' 00:27:09.042 13:11:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:09.042 13:11:13 -- common/autotest_common.sh@10 -- # set +x 00:27:09.618 13:11:13 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:09.618 13:11:13 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:27:09.888 [2024-04-17 13:11:13.946847] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:09.888 13:11:13 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=126976 00:27:09.888 13:11:13 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:09.888 
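# A worked check of the geometry the following steps depend on: each
# 32 MiB malloc base bdev is 65536 blocks of 512 B, and with the
# superblock enabled the first 2048 blocks are reserved (the data_offset
# shown above), leaving data_size = 63488 blocks per base bdev. raid5f
# keeps one parity strip per stripe, so the 3-bdev array exposes
# 2 x 63488 = 126976 data blocks, the num_blocks value just read back.
# One 64 KiB strip is 65536 / 512 = 128 blocks, so a full stripe of data
# is 2 x 128 = 256 blocks = 131072 bytes; that is why the test settles on
# write_unit_size=256 and fills the array below with dd bs=131072
# count=496 (126976 / 256 = 496 full stripes, and 496 x 131072 =
# 65011712 bytes, the byte count dd goes on to report).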
13:11:13 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:10.159 13:11:14 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:27:10.159 13:11:14 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:27:10.159 13:11:14 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:27:10.159 13:11:14 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:27:10.159 13:11:14 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:10.159 13:11:14 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:27:10.159 13:11:14 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:10.159 13:11:14 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:27:10.159 13:11:14 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:10.159 13:11:14 -- bdev/nbd_common.sh@12 -- # local i 00:27:10.159 13:11:14 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:10.159 13:11:14 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:10.159 13:11:14 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:27:10.429 [2024-04-17 13:11:14.430836] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:27:10.429 /dev/nbd0 00:27:10.429 13:11:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:10.429 13:11:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:10.429 13:11:14 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:27:10.429 13:11:14 -- common/autotest_common.sh@855 -- # local i 00:27:10.429 13:11:14 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:10.429 13:11:14 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:10.429 13:11:14 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:27:10.429 13:11:14 -- common/autotest_common.sh@859 -- # break 00:27:10.429 13:11:14 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:10.429 13:11:14 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:10.429 13:11:14 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:10.429 1+0 records in 00:27:10.429 1+0 records out 00:27:10.429 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292446 s, 14.0 MB/s 00:27:10.429 13:11:14 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:10.429 13:11:14 -- common/autotest_common.sh@872 -- # size=4096 00:27:10.429 13:11:14 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:10.429 13:11:14 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:10.429 13:11:14 -- common/autotest_common.sh@875 -- # return 0 00:27:10.429 13:11:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:10.429 13:11:14 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:10.429 13:11:14 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:27:10.429 13:11:14 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:27:10.429 13:11:14 -- bdev/bdev_raid.sh@582 -- # echo 128 00:27:10.429 13:11:14 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:27:11.013 496+0 records in 00:27:11.013 496+0 records out 00:27:11.013 65011712 bytes (65 MB, 62 MiB) copied, 0.416283 s, 156 MB/s 00:27:11.013 13:11:14 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:27:11.013 13:11:14 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:11.013 13:11:14 -- 
bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:27:11.013 13:11:14 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:11.013 13:11:14 -- bdev/nbd_common.sh@51 -- # local i 00:27:11.013 13:11:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:11.013 13:11:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:11.271 13:11:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:11.271 13:11:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:11.271 13:11:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:11.271 13:11:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:11.271 13:11:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:11.271 13:11:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:11.271 [2024-04-17 13:11:15.204007] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:11.271 13:11:15 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:27:11.271 13:11:15 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:27:11.271 13:11:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:11.271 13:11:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:11.271 13:11:15 -- bdev/nbd_common.sh@41 -- # break 00:27:11.271 13:11:15 -- bdev/nbd_common.sh@45 -- # return 0 00:27:11.271 13:11:15 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:27:11.529 [2024-04-17 13:11:15.525671] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:11.529 13:11:15 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:27:11.529 13:11:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:11.529 13:11:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:11.529 13:11:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:11.529 13:11:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:11.529 13:11:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:27:11.529 13:11:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:11.529 13:11:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:11.529 13:11:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:11.529 13:11:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:11.529 13:11:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:11.529 13:11:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:11.811 13:11:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:11.811 "name": "raid_bdev1", 00:27:11.811 "uuid": "3d9a3033-d515-4ccb-8c59-cf9a6c88eaee", 00:27:11.811 "strip_size_kb": 64, 00:27:11.811 "state": "online", 00:27:11.811 "raid_level": "raid5f", 00:27:11.811 "superblock": true, 00:27:11.811 "num_base_bdevs": 3, 00:27:11.811 "num_base_bdevs_discovered": 2, 00:27:11.811 "num_base_bdevs_operational": 2, 00:27:11.811 "base_bdevs_list": [ 00:27:11.811 { 00:27:11.811 "name": null, 00:27:11.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:11.811 "is_configured": false, 00:27:11.811 "data_offset": 2048, 00:27:11.811 "data_size": 63488 00:27:11.811 }, 00:27:11.811 { 00:27:11.811 "name": "BaseBdev2", 00:27:11.811 "uuid": "39335f4e-8a18-5b0d-a459-75fa1ddb444d", 00:27:11.811 "is_configured": true, 00:27:11.811 "data_offset": 2048, 00:27:11.811 "data_size": 63488 00:27:11.811 }, 
00:27:11.811 { 00:27:11.811 "name": "BaseBdev3", 00:27:11.811 "uuid": "3e2f7242-2ab9-5eaf-8e9a-40f1a7f9c219", 00:27:11.811 "is_configured": true, 00:27:11.811 "data_offset": 2048, 00:27:11.811 "data_size": 63488 00:27:11.811 } 00:27:11.811 ] 00:27:11.811 }' 00:27:11.811 13:11:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:11.811 13:11:15 -- common/autotest_common.sh@10 -- # set +x 00:27:12.764 13:11:16 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:12.764 [2024-04-17 13:11:16.757962] bdev_raid.c:3247:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:27:12.764 [2024-04-17 13:11:16.758034] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:12.764 [2024-04-17 13:11:16.771764] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a710 00:27:12.764 [2024-04-17 13:11:16.778753] bdev_raid.c:2751:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:12.764 13:11:16 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:27:13.698 13:11:17 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:13.698 13:11:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:13.698 13:11:17 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:13.698 13:11:17 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:13.698 13:11:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:13.698 13:11:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:13.698 13:11:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:13.955 13:11:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:13.955 "name": "raid_bdev1", 00:27:13.955 "uuid": "3d9a3033-d515-4ccb-8c59-cf9a6c88eaee", 00:27:13.955 "strip_size_kb": 64, 00:27:13.955 "state": "online", 00:27:13.955 "raid_level": "raid5f", 00:27:13.955 "superblock": true, 00:27:13.955 "num_base_bdevs": 3, 00:27:13.955 "num_base_bdevs_discovered": 3, 00:27:13.955 "num_base_bdevs_operational": 3, 00:27:13.955 "process": { 00:27:13.955 "type": "rebuild", 00:27:13.955 "target": "spare", 00:27:13.956 "progress": { 00:27:13.956 "blocks": 24576, 00:27:13.956 "percent": 19 00:27:13.956 } 00:27:13.956 }, 00:27:13.956 "base_bdevs_list": [ 00:27:13.956 { 00:27:13.956 "name": "spare", 00:27:13.956 "uuid": "44e6b83b-a226-5f2b-a13c-4c45fc6e4a2d", 00:27:13.956 "is_configured": true, 00:27:13.956 "data_offset": 2048, 00:27:13.956 "data_size": 63488 00:27:13.956 }, 00:27:13.956 { 00:27:13.956 "name": "BaseBdev2", 00:27:13.956 "uuid": "39335f4e-8a18-5b0d-a459-75fa1ddb444d", 00:27:13.956 "is_configured": true, 00:27:13.956 "data_offset": 2048, 00:27:13.956 "data_size": 63488 00:27:13.956 }, 00:27:13.956 { 00:27:13.956 "name": "BaseBdev3", 00:27:13.956 "uuid": "3e2f7242-2ab9-5eaf-8e9a-40f1a7f9c219", 00:27:13.956 "is_configured": true, 00:27:13.956 "data_offset": 2048, 00:27:13.956 "data_size": 63488 00:27:13.956 } 00:27:13.956 ] 00:27:13.956 }' 00:27:13.956 13:11:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:13.956 13:11:18 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:14.213 13:11:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:14.213 13:11:18 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:14.213 13:11:18 -- bdev/bdev_raid.sh@604 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:14.471 [2024-04-17 13:11:18.404792] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:14.471 [2024-04-17 13:11:18.497883] bdev_raid.c:2442:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:14.471 [2024-04-17 13:11:18.497988] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:14.471 13:11:18 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:27:14.471 13:11:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:14.471 13:11:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:14.471 13:11:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:14.471 13:11:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:14.471 13:11:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:27:14.471 13:11:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:14.471 13:11:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:14.471 13:11:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:14.471 13:11:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:14.471 13:11:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:14.471 13:11:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:14.728 13:11:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:14.728 "name": "raid_bdev1", 00:27:14.728 "uuid": "3d9a3033-d515-4ccb-8c59-cf9a6c88eaee", 00:27:14.728 "strip_size_kb": 64, 00:27:14.728 "state": "online", 00:27:14.728 "raid_level": "raid5f", 00:27:14.728 "superblock": true, 00:27:14.728 "num_base_bdevs": 3, 00:27:14.728 "num_base_bdevs_discovered": 2, 00:27:14.728 "num_base_bdevs_operational": 2, 00:27:14.728 "base_bdevs_list": [ 00:27:14.728 { 00:27:14.728 "name": null, 00:27:14.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:14.728 "is_configured": false, 00:27:14.728 "data_offset": 2048, 00:27:14.728 "data_size": 63488 00:27:14.728 }, 00:27:14.728 { 00:27:14.728 "name": "BaseBdev2", 00:27:14.728 "uuid": "39335f4e-8a18-5b0d-a459-75fa1ddb444d", 00:27:14.728 "is_configured": true, 00:27:14.728 "data_offset": 2048, 00:27:14.728 "data_size": 63488 00:27:14.728 }, 00:27:14.728 { 00:27:14.728 "name": "BaseBdev3", 00:27:14.728 "uuid": "3e2f7242-2ab9-5eaf-8e9a-40f1a7f9c219", 00:27:14.728 "is_configured": true, 00:27:14.728 "data_offset": 2048, 00:27:14.728 "data_size": 63488 00:27:14.728 } 00:27:14.728 ] 00:27:14.728 }' 00:27:14.728 13:11:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:14.728 13:11:18 -- common/autotest_common.sh@10 -- # set +x 00:27:15.660 13:11:19 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:15.660 13:11:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:15.660 13:11:19 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:27:15.661 13:11:19 -- bdev/bdev_raid.sh@185 -- # local target=none 00:27:15.661 13:11:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:15.661 13:11:19 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:15.661 13:11:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:15.661 13:11:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:15.661 "name": 
"raid_bdev1", 00:27:15.661 "uuid": "3d9a3033-d515-4ccb-8c59-cf9a6c88eaee", 00:27:15.661 "strip_size_kb": 64, 00:27:15.661 "state": "online", 00:27:15.661 "raid_level": "raid5f", 00:27:15.661 "superblock": true, 00:27:15.661 "num_base_bdevs": 3, 00:27:15.661 "num_base_bdevs_discovered": 2, 00:27:15.661 "num_base_bdevs_operational": 2, 00:27:15.661 "base_bdevs_list": [ 00:27:15.661 { 00:27:15.661 "name": null, 00:27:15.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:15.661 "is_configured": false, 00:27:15.661 "data_offset": 2048, 00:27:15.661 "data_size": 63488 00:27:15.661 }, 00:27:15.661 { 00:27:15.661 "name": "BaseBdev2", 00:27:15.661 "uuid": "39335f4e-8a18-5b0d-a459-75fa1ddb444d", 00:27:15.661 "is_configured": true, 00:27:15.661 "data_offset": 2048, 00:27:15.661 "data_size": 63488 00:27:15.661 }, 00:27:15.661 { 00:27:15.661 "name": "BaseBdev3", 00:27:15.661 "uuid": "3e2f7242-2ab9-5eaf-8e9a-40f1a7f9c219", 00:27:15.661 "is_configured": true, 00:27:15.661 "data_offset": 2048, 00:27:15.661 "data_size": 63488 00:27:15.661 } 00:27:15.661 ] 00:27:15.661 }' 00:27:15.661 13:11:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:15.918 13:11:19 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:15.918 13:11:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:15.918 13:11:19 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:27:15.919 13:11:19 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:16.176 [2024-04-17 13:11:20.162660] bdev_raid.c:3247:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:27:16.176 [2024-04-17 13:11:20.162717] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:16.176 [2024-04-17 13:11:20.175726] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a8b0 00:27:16.176 [2024-04-17 13:11:20.182549] bdev_raid.c:2751:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:16.176 13:11:20 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:27:17.109 13:11:21 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:17.109 13:11:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:17.109 13:11:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:17.109 13:11:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:17.109 13:11:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:17.109 13:11:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:17.109 13:11:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:17.368 13:11:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:17.368 "name": "raid_bdev1", 00:27:17.368 "uuid": "3d9a3033-d515-4ccb-8c59-cf9a6c88eaee", 00:27:17.368 "strip_size_kb": 64, 00:27:17.368 "state": "online", 00:27:17.368 "raid_level": "raid5f", 00:27:17.368 "superblock": true, 00:27:17.368 "num_base_bdevs": 3, 00:27:17.368 "num_base_bdevs_discovered": 3, 00:27:17.368 "num_base_bdevs_operational": 3, 00:27:17.368 "process": { 00:27:17.368 "type": "rebuild", 00:27:17.368 "target": "spare", 00:27:17.368 "progress": { 00:27:17.368 "blocks": 24576, 00:27:17.368 "percent": 19 00:27:17.368 } 00:27:17.368 }, 00:27:17.368 "base_bdevs_list": [ 00:27:17.368 { 00:27:17.368 "name": "spare", 00:27:17.368 "uuid": "44e6b83b-a226-5f2b-a13c-4c45fc6e4a2d", 
00:27:17.368 "is_configured": true, 00:27:17.368 "data_offset": 2048, 00:27:17.368 "data_size": 63488 00:27:17.368 }, 00:27:17.368 { 00:27:17.368 "name": "BaseBdev2", 00:27:17.368 "uuid": "39335f4e-8a18-5b0d-a459-75fa1ddb444d", 00:27:17.368 "is_configured": true, 00:27:17.368 "data_offset": 2048, 00:27:17.368 "data_size": 63488 00:27:17.368 }, 00:27:17.368 { 00:27:17.368 "name": "BaseBdev3", 00:27:17.368 "uuid": "3e2f7242-2ab9-5eaf-8e9a-40f1a7f9c219", 00:27:17.368 "is_configured": true, 00:27:17.368 "data_offset": 2048, 00:27:17.368 "data_size": 63488 00:27:17.368 } 00:27:17.368 ] 00:27:17.368 }' 00:27:17.368 13:11:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:17.626 13:11:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:17.626 13:11:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:17.626 13:11:21 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:17.626 13:11:21 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:27:17.626 13:11:21 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:27:17.626 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:27:17.626 13:11:21 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:27:17.626 13:11:21 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:27:17.626 13:11:21 -- bdev/bdev_raid.sh@657 -- # local timeout=702 00:27:17.626 13:11:21 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:17.626 13:11:21 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:17.626 13:11:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:17.626 13:11:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:17.626 13:11:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:17.626 13:11:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:17.627 13:11:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:17.627 13:11:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:17.885 13:11:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:17.885 "name": "raid_bdev1", 00:27:17.885 "uuid": "3d9a3033-d515-4ccb-8c59-cf9a6c88eaee", 00:27:17.885 "strip_size_kb": 64, 00:27:17.885 "state": "online", 00:27:17.885 "raid_level": "raid5f", 00:27:17.885 "superblock": true, 00:27:17.885 "num_base_bdevs": 3, 00:27:17.885 "num_base_bdevs_discovered": 3, 00:27:17.885 "num_base_bdevs_operational": 3, 00:27:17.885 "process": { 00:27:17.885 "type": "rebuild", 00:27:17.885 "target": "spare", 00:27:17.885 "progress": { 00:27:17.885 "blocks": 32768, 00:27:17.885 "percent": 25 00:27:17.885 } 00:27:17.885 }, 00:27:17.885 "base_bdevs_list": [ 00:27:17.885 { 00:27:17.885 "name": "spare", 00:27:17.885 "uuid": "44e6b83b-a226-5f2b-a13c-4c45fc6e4a2d", 00:27:17.885 "is_configured": true, 00:27:17.885 "data_offset": 2048, 00:27:17.885 "data_size": 63488 00:27:17.885 }, 00:27:17.885 { 00:27:17.885 "name": "BaseBdev2", 00:27:17.885 "uuid": "39335f4e-8a18-5b0d-a459-75fa1ddb444d", 00:27:17.885 "is_configured": true, 00:27:17.885 "data_offset": 2048, 00:27:17.885 "data_size": 63488 00:27:17.885 }, 00:27:17.885 { 00:27:17.885 "name": "BaseBdev3", 00:27:17.885 "uuid": "3e2f7242-2ab9-5eaf-8e9a-40f1a7f9c219", 00:27:17.885 "is_configured": true, 00:27:17.885 "data_offset": 2048, 00:27:17.885 "data_size": 63488 00:27:17.885 } 00:27:17.885 ] 00:27:17.885 }' 00:27:17.885 13:11:21 -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:17.885 13:11:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:17.885 13:11:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:17.885 13:11:21 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:17.885 13:11:21 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:19.258 13:11:22 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:19.258 13:11:22 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:19.258 13:11:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:19.258 13:11:22 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:19.258 13:11:22 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:19.258 13:11:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:19.258 13:11:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:19.258 13:11:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:19.258 13:11:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:19.258 "name": "raid_bdev1", 00:27:19.258 "uuid": "3d9a3033-d515-4ccb-8c59-cf9a6c88eaee", 00:27:19.258 "strip_size_kb": 64, 00:27:19.258 "state": "online", 00:27:19.258 "raid_level": "raid5f", 00:27:19.258 "superblock": true, 00:27:19.258 "num_base_bdevs": 3, 00:27:19.258 "num_base_bdevs_discovered": 3, 00:27:19.258 "num_base_bdevs_operational": 3, 00:27:19.258 "process": { 00:27:19.258 "type": "rebuild", 00:27:19.258 "target": "spare", 00:27:19.258 "progress": { 00:27:19.258 "blocks": 61440, 00:27:19.258 "percent": 48 00:27:19.258 } 00:27:19.258 }, 00:27:19.258 "base_bdevs_list": [ 00:27:19.258 { 00:27:19.258 "name": "spare", 00:27:19.258 "uuid": "44e6b83b-a226-5f2b-a13c-4c45fc6e4a2d", 00:27:19.258 "is_configured": true, 00:27:19.258 "data_offset": 2048, 00:27:19.258 "data_size": 63488 00:27:19.258 }, 00:27:19.258 { 00:27:19.258 "name": "BaseBdev2", 00:27:19.258 "uuid": "39335f4e-8a18-5b0d-a459-75fa1ddb444d", 00:27:19.258 "is_configured": true, 00:27:19.258 "data_offset": 2048, 00:27:19.258 "data_size": 63488 00:27:19.258 }, 00:27:19.258 { 00:27:19.258 "name": "BaseBdev3", 00:27:19.258 "uuid": "3e2f7242-2ab9-5eaf-8e9a-40f1a7f9c219", 00:27:19.258 "is_configured": true, 00:27:19.258 "data_offset": 2048, 00:27:19.258 "data_size": 63488 00:27:19.258 } 00:27:19.258 ] 00:27:19.258 }' 00:27:19.258 13:11:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:19.258 13:11:23 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:19.258 13:11:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:19.258 13:11:23 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:19.258 13:11:23 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:20.240 13:11:24 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:20.240 13:11:24 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:20.240 13:11:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:20.240 13:11:24 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:20.240 13:11:24 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:20.240 13:11:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:20.240 13:11:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:20.240 13:11:24 -- bdev/bdev_raid.sh@188 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:27:20.504 13:11:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:20.504 "name": "raid_bdev1", 00:27:20.504 "uuid": "3d9a3033-d515-4ccb-8c59-cf9a6c88eaee", 00:27:20.504 "strip_size_kb": 64, 00:27:20.504 "state": "online", 00:27:20.504 "raid_level": "raid5f", 00:27:20.504 "superblock": true, 00:27:20.504 "num_base_bdevs": 3, 00:27:20.504 "num_base_bdevs_discovered": 3, 00:27:20.504 "num_base_bdevs_operational": 3, 00:27:20.504 "process": { 00:27:20.504 "type": "rebuild", 00:27:20.504 "target": "spare", 00:27:20.504 "progress": { 00:27:20.504 "blocks": 88064, 00:27:20.504 "percent": 69 00:27:20.504 } 00:27:20.504 }, 00:27:20.504 "base_bdevs_list": [ 00:27:20.504 { 00:27:20.504 "name": "spare", 00:27:20.504 "uuid": "44e6b83b-a226-5f2b-a13c-4c45fc6e4a2d", 00:27:20.504 "is_configured": true, 00:27:20.504 "data_offset": 2048, 00:27:20.504 "data_size": 63488 00:27:20.504 }, 00:27:20.504 { 00:27:20.504 "name": "BaseBdev2", 00:27:20.504 "uuid": "39335f4e-8a18-5b0d-a459-75fa1ddb444d", 00:27:20.504 "is_configured": true, 00:27:20.504 "data_offset": 2048, 00:27:20.504 "data_size": 63488 00:27:20.504 }, 00:27:20.504 { 00:27:20.504 "name": "BaseBdev3", 00:27:20.504 "uuid": "3e2f7242-2ab9-5eaf-8e9a-40f1a7f9c219", 00:27:20.504 "is_configured": true, 00:27:20.504 "data_offset": 2048, 00:27:20.504 "data_size": 63488 00:27:20.504 } 00:27:20.504 ] 00:27:20.504 }' 00:27:20.504 13:11:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:20.762 13:11:24 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:20.762 13:11:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:20.762 13:11:24 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:20.762 13:11:24 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:21.696 13:11:25 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:21.696 13:11:25 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:21.696 13:11:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:21.696 13:11:25 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:21.696 13:11:25 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:21.696 13:11:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:21.696 13:11:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:21.696 13:11:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:21.953 13:11:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:21.953 "name": "raid_bdev1", 00:27:21.953 "uuid": "3d9a3033-d515-4ccb-8c59-cf9a6c88eaee", 00:27:21.953 "strip_size_kb": 64, 00:27:21.953 "state": "online", 00:27:21.953 "raid_level": "raid5f", 00:27:21.953 "superblock": true, 00:27:21.953 "num_base_bdevs": 3, 00:27:21.953 "num_base_bdevs_discovered": 3, 00:27:21.953 "num_base_bdevs_operational": 3, 00:27:21.953 "process": { 00:27:21.953 "type": "rebuild", 00:27:21.953 "target": "spare", 00:27:21.953 "progress": { 00:27:21.953 "blocks": 116736, 00:27:21.953 "percent": 91 00:27:21.953 } 00:27:21.953 }, 00:27:21.953 "base_bdevs_list": [ 00:27:21.953 { 00:27:21.953 "name": "spare", 00:27:21.953 "uuid": "44e6b83b-a226-5f2b-a13c-4c45fc6e4a2d", 00:27:21.953 "is_configured": true, 00:27:21.953 "data_offset": 2048, 00:27:21.953 "data_size": 63488 00:27:21.953 }, 00:27:21.953 { 00:27:21.953 "name": "BaseBdev2", 00:27:21.953 "uuid": "39335f4e-8a18-5b0d-a459-75fa1ddb444d", 00:27:21.953 
"is_configured": true, 00:27:21.953 "data_offset": 2048, 00:27:21.953 "data_size": 63488 00:27:21.953 }, 00:27:21.953 { 00:27:21.953 "name": "BaseBdev3", 00:27:21.953 "uuid": "3e2f7242-2ab9-5eaf-8e9a-40f1a7f9c219", 00:27:21.953 "is_configured": true, 00:27:21.953 "data_offset": 2048, 00:27:21.953 "data_size": 63488 00:27:21.953 } 00:27:21.953 ] 00:27:21.953 }' 00:27:21.953 13:11:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:21.953 13:11:26 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:21.953 13:11:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:22.211 13:11:26 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:22.211 13:11:26 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:22.468 [2024-04-17 13:11:26.452981] bdev_raid.c:2716:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:22.468 [2024-04-17 13:11:26.453067] bdev_raid.c:2433:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:22.468 [2024-04-17 13:11:26.453238] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:23.035 13:11:27 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:23.035 13:11:27 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:23.035 13:11:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:23.035 13:11:27 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:23.035 13:11:27 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:23.035 13:11:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:23.035 13:11:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:23.035 13:11:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:23.294 13:11:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:23.294 "name": "raid_bdev1", 00:27:23.294 "uuid": "3d9a3033-d515-4ccb-8c59-cf9a6c88eaee", 00:27:23.294 "strip_size_kb": 64, 00:27:23.294 "state": "online", 00:27:23.294 "raid_level": "raid5f", 00:27:23.294 "superblock": true, 00:27:23.294 "num_base_bdevs": 3, 00:27:23.294 "num_base_bdevs_discovered": 3, 00:27:23.294 "num_base_bdevs_operational": 3, 00:27:23.294 "base_bdevs_list": [ 00:27:23.294 { 00:27:23.294 "name": "spare", 00:27:23.294 "uuid": "44e6b83b-a226-5f2b-a13c-4c45fc6e4a2d", 00:27:23.294 "is_configured": true, 00:27:23.294 "data_offset": 2048, 00:27:23.294 "data_size": 63488 00:27:23.294 }, 00:27:23.294 { 00:27:23.294 "name": "BaseBdev2", 00:27:23.294 "uuid": "39335f4e-8a18-5b0d-a459-75fa1ddb444d", 00:27:23.294 "is_configured": true, 00:27:23.294 "data_offset": 2048, 00:27:23.294 "data_size": 63488 00:27:23.294 }, 00:27:23.294 { 00:27:23.294 "name": "BaseBdev3", 00:27:23.294 "uuid": "3e2f7242-2ab9-5eaf-8e9a-40f1a7f9c219", 00:27:23.294 "is_configured": true, 00:27:23.294 "data_offset": 2048, 00:27:23.294 "data_size": 63488 00:27:23.294 } 00:27:23.294 ] 00:27:23.294 }' 00:27:23.294 13:11:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:23.562 13:11:27 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:23.563 13:11:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:23.563 13:11:27 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:27:23.563 13:11:27 -- bdev/bdev_raid.sh@660 -- # break 00:27:23.563 13:11:27 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:23.563 13:11:27 -- 
bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:23.563 13:11:27 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:27:23.563 13:11:27 -- bdev/bdev_raid.sh@185 -- # local target=none 00:27:23.563 13:11:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:23.563 13:11:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:23.563 13:11:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:23.821 13:11:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:23.821 "name": "raid_bdev1", 00:27:23.821 "uuid": "3d9a3033-d515-4ccb-8c59-cf9a6c88eaee", 00:27:23.821 "strip_size_kb": 64, 00:27:23.821 "state": "online", 00:27:23.821 "raid_level": "raid5f", 00:27:23.821 "superblock": true, 00:27:23.821 "num_base_bdevs": 3, 00:27:23.821 "num_base_bdevs_discovered": 3, 00:27:23.821 "num_base_bdevs_operational": 3, 00:27:23.821 "base_bdevs_list": [ 00:27:23.821 { 00:27:23.821 "name": "spare", 00:27:23.821 "uuid": "44e6b83b-a226-5f2b-a13c-4c45fc6e4a2d", 00:27:23.821 "is_configured": true, 00:27:23.821 "data_offset": 2048, 00:27:23.821 "data_size": 63488 00:27:23.821 }, 00:27:23.821 { 00:27:23.821 "name": "BaseBdev2", 00:27:23.821 "uuid": "39335f4e-8a18-5b0d-a459-75fa1ddb444d", 00:27:23.821 "is_configured": true, 00:27:23.821 "data_offset": 2048, 00:27:23.821 "data_size": 63488 00:27:23.821 }, 00:27:23.821 { 00:27:23.821 "name": "BaseBdev3", 00:27:23.821 "uuid": "3e2f7242-2ab9-5eaf-8e9a-40f1a7f9c219", 00:27:23.821 "is_configured": true, 00:27:23.821 "data_offset": 2048, 00:27:23.821 "data_size": 63488 00:27:23.821 } 00:27:23.821 ] 00:27:23.821 }' 00:27:23.821 13:11:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:23.821 13:11:27 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:23.821 13:11:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:23.821 13:11:27 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:27:23.821 13:11:27 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:27:23.821 13:11:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:23.821 13:11:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:23.821 13:11:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:23.821 13:11:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:23.821 13:11:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:23.821 13:11:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:23.821 13:11:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:23.821 13:11:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:23.821 13:11:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:23.821 13:11:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:23.821 13:11:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:24.083 13:11:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:24.083 "name": "raid_bdev1", 00:27:24.083 "uuid": "3d9a3033-d515-4ccb-8c59-cf9a6c88eaee", 00:27:24.083 "strip_size_kb": 64, 00:27:24.083 "state": "online", 00:27:24.084 "raid_level": "raid5f", 00:27:24.084 "superblock": true, 00:27:24.084 "num_base_bdevs": 3, 00:27:24.084 "num_base_bdevs_discovered": 3, 00:27:24.084 "num_base_bdevs_operational": 3, 00:27:24.084 "base_bdevs_list": [ 00:27:24.084 { 00:27:24.084 "name": 
"spare", 00:27:24.084 "uuid": "44e6b83b-a226-5f2b-a13c-4c45fc6e4a2d", 00:27:24.084 "is_configured": true, 00:27:24.084 "data_offset": 2048, 00:27:24.084 "data_size": 63488 00:27:24.084 }, 00:27:24.084 { 00:27:24.084 "name": "BaseBdev2", 00:27:24.084 "uuid": "39335f4e-8a18-5b0d-a459-75fa1ddb444d", 00:27:24.084 "is_configured": true, 00:27:24.084 "data_offset": 2048, 00:27:24.084 "data_size": 63488 00:27:24.084 }, 00:27:24.084 { 00:27:24.084 "name": "BaseBdev3", 00:27:24.084 "uuid": "3e2f7242-2ab9-5eaf-8e9a-40f1a7f9c219", 00:27:24.084 "is_configured": true, 00:27:24.084 "data_offset": 2048, 00:27:24.084 "data_size": 63488 00:27:24.084 } 00:27:24.084 ] 00:27:24.084 }' 00:27:24.084 13:11:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:24.084 13:11:28 -- common/autotest_common.sh@10 -- # set +x 00:27:25.021 13:11:28 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:25.279 [2024-04-17 13:11:29.279669] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:25.279 [2024-04-17 13:11:29.279726] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:25.279 [2024-04-17 13:11:29.279854] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:25.279 [2024-04-17 13:11:29.279956] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:25.279 [2024-04-17 13:11:29.279969] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:27:25.279 13:11:29 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:25.279 13:11:29 -- bdev/bdev_raid.sh@671 -- # jq length 00:27:25.536 13:11:29 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:27:25.536 13:11:29 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:27:25.536 13:11:29 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:27:25.536 13:11:29 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:25.536 13:11:29 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:27:25.536 13:11:29 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:25.536 13:11:29 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:27:25.536 13:11:29 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:25.536 13:11:29 -- bdev/nbd_common.sh@12 -- # local i 00:27:25.536 13:11:29 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:25.536 13:11:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:25.536 13:11:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:27:25.794 /dev/nbd0 00:27:25.794 13:11:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:25.794 13:11:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:25.794 13:11:29 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:27:25.794 13:11:29 -- common/autotest_common.sh@855 -- # local i 00:27:25.794 13:11:29 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:25.794 13:11:29 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:25.794 13:11:29 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:27:25.794 13:11:29 -- common/autotest_common.sh@859 -- # break 00:27:25.794 13:11:29 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:25.794 13:11:29 -- 
common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:25.794 13:11:29 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:25.794 1+0 records in 00:27:25.794 1+0 records out 00:27:25.794 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335632 s, 12.2 MB/s 00:27:26.051 13:11:29 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:26.051 13:11:29 -- common/autotest_common.sh@872 -- # size=4096 00:27:26.051 13:11:29 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:26.051 13:11:29 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:26.051 13:11:29 -- common/autotest_common.sh@875 -- # return 0 00:27:26.051 13:11:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:26.051 13:11:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:26.051 13:11:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:27:26.051 /dev/nbd1 00:27:26.308 13:11:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:26.308 13:11:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:26.308 13:11:30 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:27:26.308 13:11:30 -- common/autotest_common.sh@855 -- # local i 00:27:26.309 13:11:30 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:26.309 13:11:30 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:26.309 13:11:30 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:27:26.309 13:11:30 -- common/autotest_common.sh@859 -- # break 00:27:26.309 13:11:30 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:26.309 13:11:30 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:26.309 13:11:30 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:26.309 1+0 records in 00:27:26.309 1+0 records out 00:27:26.309 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000463938 s, 8.8 MB/s 00:27:26.309 13:11:30 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:26.309 13:11:30 -- common/autotest_common.sh@872 -- # size=4096 00:27:26.309 13:11:30 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:26.309 13:11:30 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:26.309 13:11:30 -- common/autotest_common.sh@875 -- # return 0 00:27:26.309 13:11:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:26.309 13:11:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:26.309 13:11:30 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:27:26.309 13:11:30 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:27:26.309 13:11:30 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:26.309 13:11:30 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:27:26.309 13:11:30 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:26.309 13:11:30 -- bdev/nbd_common.sh@51 -- # local i 00:27:26.309 13:11:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:26.309 13:11:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:26.565 13:11:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:26.565 13:11:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 
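The helpers being traced here come straight out of the harness: waitfornbd (common/autotest_common.sh@854-875 above) gates the test until a freshly started /dev/nbdX is actually readable, and waitfornbd_exit (bdev/nbd_common.sh@35-45) waits for the device to vanish again after nbd_stop_disk. A sketch reconstructed from the traced commands; the function scaffolding and the retry branches not exercised in this run are assumptions:

function waitfornbd() {
	local nbd_name=$1
	local i

	# Wait for the kernel to register the device in /proc/partitions.
	for ((i = 1; i <= 20; i++)); do
		if grep -q -w "$nbd_name" /proc/partitions; then
			break
		else
			sleep 0.1
		fi
	done

	# The device can lag behind its /proc entry, so prove it is readable
	# with a single direct-I/O block (the dd/stat/rm sequence traced above).
	for ((i = 1; i <= 20; i++)); do
		dd if=/dev/$nbd_name of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
		size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
		rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
		if [ "$size" != 0 ]; then
			break
		else
			sleep 0.1
		fi
	done

	return 0
}

function waitfornbd_exit() {
	local nbd_name=$1
	local i

	# Inverse poll: loop until nbd_stop_disk has fully torn the device down.
	for ((i = 1; i <= 20; i++)); do
		if grep -q -w "$nbd_name" /proc/partitions; then
			sleep 0.1
		else
			break
		fi
	done

	return 0
}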
00:27:26.565 13:11:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:26.566 13:11:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:26.566 13:11:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:26.566 13:11:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:26.566 13:11:30 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:27:26.823 13:11:30 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:27:26.823 13:11:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:26.823 13:11:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:26.823 13:11:30 -- bdev/nbd_common.sh@41 -- # break 00:27:26.823 13:11:30 -- bdev/nbd_common.sh@45 -- # return 0 00:27:26.823 13:11:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:26.823 13:11:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:27:27.081 13:11:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:27.081 13:11:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:27.081 13:11:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:27.081 13:11:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:27.081 13:11:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:27.081 13:11:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:27.081 13:11:31 -- bdev/nbd_common.sh@41 -- # break 00:27:27.081 13:11:31 -- bdev/nbd_common.sh@45 -- # return 0 00:27:27.081 13:11:31 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:27:27.081 13:11:31 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:27:27.081 13:11:31 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:27:27.081 13:11:31 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:27:27.338 13:11:31 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:27.596 [2024-04-17 13:11:31.488019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:27.596 [2024-04-17 13:11:31.488141] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:27.596 [2024-04-17 13:11:31.488181] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:27:27.596 [2024-04-17 13:11:31.488210] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:27.596 [2024-04-17 13:11:31.490782] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:27.596 [2024-04-17 13:11:31.490867] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:27.596 [2024-04-17 13:11:31.491002] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:27:27.596 [2024-04-17 13:11:31.491086] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:27.596 BaseBdev1 00:27:27.596 13:11:31 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:27:27.596 13:11:31 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:27:27.596 13:11:31 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:27:27.854 13:11:31 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:28.112 [2024-04-17 
13:11:32.056140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:28.112 [2024-04-17 13:11:32.056252] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:28.112 [2024-04-17 13:11:32.056300] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:27:28.112 [2024-04-17 13:11:32.056330] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:28.112 [2024-04-17 13:11:32.056881] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:28.112 [2024-04-17 13:11:32.056947] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:28.112 [2024-04-17 13:11:32.057066] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:27:28.112 [2024-04-17 13:11:32.057082] bdev_raid.c:3395:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:27:28.112 [2024-04-17 13:11:32.057090] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:28.112 [2024-04-17 13:11:32.057111] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b180 name raid_bdev1, state configuring 00:27:28.112 [2024-04-17 13:11:32.057179] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:28.112 BaseBdev2 00:27:28.112 13:11:32 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:27:28.112 13:11:32 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:27:28.112 13:11:32 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:27:28.370 13:11:32 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:27:28.628 [2024-04-17 13:11:32.636297] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:27:28.628 [2024-04-17 13:11:32.636420] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:28.628 [2024-04-17 13:11:32.636474] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:27:28.628 [2024-04-17 13:11:32.636497] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:28.628 [2024-04-17 13:11:32.637034] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:28.628 [2024-04-17 13:11:32.637106] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:28.628 [2024-04-17 13:11:32.637255] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:27:28.628 [2024-04-17 13:11:32.637320] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:28.628 BaseBdev3 00:27:28.628 13:11:32 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:28.886 13:11:32 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:29.145 [2024-04-17 13:11:33.160415] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:29.145 [2024-04-17 13:11:33.160539] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:29.145 [2024-04-17 
13:11:33.160585] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:27:29.145 [2024-04-17 13:11:33.160617] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:29.145 [2024-04-17 13:11:33.161196] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:29.145 [2024-04-17 13:11:33.161280] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:29.145 [2024-04-17 13:11:33.161407] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:27:29.145 [2024-04-17 13:11:33.161453] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:29.145 spare 00:27:29.145 13:11:33 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:27:29.145 13:11:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:29.145 13:11:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:29.145 13:11:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:29.145 13:11:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:29.145 13:11:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:29.145 13:11:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:29.145 13:11:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:29.146 13:11:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:29.146 13:11:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:29.146 13:11:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:29.146 13:11:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:29.146 [2024-04-17 13:11:33.261585] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b780 00:27:29.146 [2024-04-17 13:11:33.261634] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:27:29.146 [2024-04-17 13:11:33.261813] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00004afe0 00:27:29.146 [2024-04-17 13:11:33.266848] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b780 00:27:29.146 [2024-04-17 13:11:33.266880] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b780 00:27:29.146 [2024-04-17 13:11:33.267094] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:29.403 13:11:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:29.403 "name": "raid_bdev1", 00:27:29.403 "uuid": "3d9a3033-d515-4ccb-8c59-cf9a6c88eaee", 00:27:29.403 "strip_size_kb": 64, 00:27:29.403 "state": "online", 00:27:29.403 "raid_level": "raid5f", 00:27:29.403 "superblock": true, 00:27:29.403 "num_base_bdevs": 3, 00:27:29.403 "num_base_bdevs_discovered": 3, 00:27:29.403 "num_base_bdevs_operational": 3, 00:27:29.403 "base_bdevs_list": [ 00:27:29.403 { 00:27:29.403 "name": "spare", 00:27:29.403 "uuid": "44e6b83b-a226-5f2b-a13c-4c45fc6e4a2d", 00:27:29.403 "is_configured": true, 00:27:29.403 "data_offset": 2048, 00:27:29.403 "data_size": 63488 00:27:29.403 }, 00:27:29.403 { 00:27:29.403 "name": "BaseBdev2", 00:27:29.403 "uuid": "39335f4e-8a18-5b0d-a459-75fa1ddb444d", 00:27:29.403 "is_configured": true, 00:27:29.403 "data_offset": 2048, 00:27:29.403 "data_size": 63488 00:27:29.403 }, 00:27:29.403 { 00:27:29.403 "name": "BaseBdev3", 00:27:29.403 "uuid": 
"3e2f7242-2ab9-5eaf-8e9a-40f1a7f9c219", 00:27:29.403 "is_configured": true, 00:27:29.403 "data_offset": 2048, 00:27:29.403 "data_size": 63488 00:27:29.403 } 00:27:29.403 ] 00:27:29.403 }' 00:27:29.403 13:11:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:29.403 13:11:33 -- common/autotest_common.sh@10 -- # set +x 00:27:30.342 13:11:34 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:30.342 13:11:34 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:30.342 13:11:34 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:27:30.342 13:11:34 -- bdev/bdev_raid.sh@185 -- # local target=none 00:27:30.342 13:11:34 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:30.342 13:11:34 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:30.342 13:11:34 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:30.342 13:11:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:30.342 "name": "raid_bdev1", 00:27:30.342 "uuid": "3d9a3033-d515-4ccb-8c59-cf9a6c88eaee", 00:27:30.342 "strip_size_kb": 64, 00:27:30.342 "state": "online", 00:27:30.342 "raid_level": "raid5f", 00:27:30.342 "superblock": true, 00:27:30.342 "num_base_bdevs": 3, 00:27:30.342 "num_base_bdevs_discovered": 3, 00:27:30.342 "num_base_bdevs_operational": 3, 00:27:30.342 "base_bdevs_list": [ 00:27:30.342 { 00:27:30.342 "name": "spare", 00:27:30.342 "uuid": "44e6b83b-a226-5f2b-a13c-4c45fc6e4a2d", 00:27:30.342 "is_configured": true, 00:27:30.342 "data_offset": 2048, 00:27:30.342 "data_size": 63488 00:27:30.342 }, 00:27:30.342 { 00:27:30.342 "name": "BaseBdev2", 00:27:30.342 "uuid": "39335f4e-8a18-5b0d-a459-75fa1ddb444d", 00:27:30.342 "is_configured": true, 00:27:30.342 "data_offset": 2048, 00:27:30.342 "data_size": 63488 00:27:30.342 }, 00:27:30.342 { 00:27:30.342 "name": "BaseBdev3", 00:27:30.342 "uuid": "3e2f7242-2ab9-5eaf-8e9a-40f1a7f9c219", 00:27:30.342 "is_configured": true, 00:27:30.342 "data_offset": 2048, 00:27:30.342 "data_size": 63488 00:27:30.343 } 00:27:30.343 ] 00:27:30.343 }' 00:27:30.343 13:11:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:30.601 13:11:34 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:30.601 13:11:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:30.601 13:11:34 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:27:30.601 13:11:34 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:30.601 13:11:34 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:27:30.859 13:11:34 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:27:30.859 13:11:34 -- bdev/bdev_raid.sh@709 -- # killprocess 137762 00:27:30.859 13:11:34 -- common/autotest_common.sh@924 -- # '[' -z 137762 ']' 00:27:30.859 13:11:34 -- common/autotest_common.sh@928 -- # kill -0 137762 00:27:30.859 13:11:34 -- common/autotest_common.sh@929 -- # uname 00:27:30.859 13:11:34 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:27:30.859 13:11:34 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 137762 00:27:30.859 13:11:34 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:27:30.859 13:11:34 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:27:30.859 13:11:34 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 137762' 00:27:30.859 killing process with pid 137762 
00:27:30.859 Received shutdown signal, test time was about 60.000000 seconds 00:27:30.859 00:27:30.859 Latency(us) 00:27:30.859 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:30.859 =================================================================================================================== 00:27:30.859 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:30.859 13:11:34 -- common/autotest_common.sh@943 -- # kill 137762 00:27:30.859 13:11:34 -- common/autotest_common.sh@948 -- # wait 137762 00:27:30.859 [2024-04-17 13:11:34.836174] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:30.859 [2024-04-17 13:11:34.836273] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:30.859 [2024-04-17 13:11:34.836380] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:30.859 [2024-04-17 13:11:34.836402] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b780 name raid_bdev1, state offline 00:27:31.117 [2024-04-17 13:11:35.168485] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:32.490 13:11:36 -- bdev/bdev_raid.sh@711 -- # return 0 00:27:32.490 00:27:32.490 real 0m27.232s 00:27:32.490 user 0m43.467s 00:27:32.490 sys 0m3.015s 00:27:32.490 13:11:36 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:27:32.490 13:11:36 -- common/autotest_common.sh@10 -- # set +x 00:27:32.490 ************************************ 00:27:32.490 END TEST raid5f_rebuild_test_sb 00:27:32.490 ************************************ 00:27:32.490 13:11:36 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:27:32.490 13:11:36 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:27:32.490 13:11:36 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:27:32.490 13:11:36 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:27:32.490 13:11:36 -- common/autotest_common.sh@10 -- # set +x 00:27:32.490 ************************************ 00:27:32.490 START TEST raid5f_state_function_test 00:27:32.490 ************************************ 00:27:32.490 13:11:36 -- common/autotest_common.sh@1099 -- # raid_state_function_test raid5f 4 false 00:27:32.490 13:11:36 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:27:32.490 13:11:36 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:27:32.490 13:11:36 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:27:32.490 13:11:36 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:27:32.490 13:11:36 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:27:32.490 13:11:36 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:27:32.490 13:11:36 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:27:32.490 13:11:36 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:27:32.490 13:11:36 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:27:32.490 13:11:36 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:27:32.490 13:11:36 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:27:32.490 13:11:36 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:27:32.490 13:11:36 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:27:32.490 13:11:36 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:27:32.490 13:11:36 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:27:32.490 13:11:36 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:27:32.490 13:11:36 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 
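The for-loop traced at bdev_raid.sh@206 around this point is a one-line generator for the base bdev names used by raid_state_function_test; with num_base_bdevs=4 the array expands as shown:

num_base_bdevs=4
base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done))
echo "${base_bdevs[@]}"    # -> BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4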
00:27:32.490 13:11:36 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:27:32.490 13:11:36 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:27:32.490 13:11:36 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:27:32.490 13:11:36 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:27:32.490 13:11:36 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:27:32.490 13:11:36 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:27:32.490 13:11:36 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:27:32.490 13:11:36 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:27:32.490 13:11:36 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:27:32.490 13:11:36 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:27:32.490 13:11:36 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:27:32.490 13:11:36 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:27:32.490 13:11:36 -- bdev/bdev_raid.sh@226 -- # raid_pid=138469 00:27:32.490 13:11:36 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 138469' 00:27:32.490 Process raid pid: 138469 00:27:32.490 13:11:36 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:27:32.490 13:11:36 -- bdev/bdev_raid.sh@228 -- # waitforlisten 138469 /var/tmp/spdk-raid.sock 00:27:32.490 13:11:36 -- common/autotest_common.sh@817 -- # '[' -z 138469 ']' 00:27:32.490 13:11:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:32.490 13:11:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:32.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:32.490 13:11:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:32.490 13:11:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:32.490 13:11:36 -- common/autotest_common.sh@10 -- # set +x 00:27:32.490 [2024-04-17 13:11:36.417839] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
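waitforlisten (common/autotest_common.sh@817-826 above) blocks until the bdev_svc app just launched with '-r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid' answers on its RPC socket; the actual polling runs under xtrace_disable, so only the setup appears in the log. A sketch of the pattern, with the hidden loop reconstructed as an assumption:

function waitforlisten() {
	# $1 = app pid, $2 = RPC socket path
	[ -z "$1" ] && return 1
	local rpc_addr=${2:-/var/tmp/spdk.sock}
	local max_retries=100
	local i

	echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
	xtrace_disable
	# Assumed body (suppressed by xtrace_disable in the log): retry a benign
	# RPC until the socket answers or the retries run out.
	for ((i = 0; i < max_retries; i++)); do
		if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
			break
		fi
		sleep 0.1
	done
	xtrace_restore
}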
00:27:32.490 [2024-04-17 13:11:36.418004] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:32.490 [2024-04-17 13:11:36.574182] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:32.748 [2024-04-17 13:11:36.786263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.006 [2024-04-17 13:11:36.987353] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:33.264 13:11:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:33.264 13:11:37 -- common/autotest_common.sh@850 -- # return 0 00:27:33.265 13:11:37 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:27:33.523 [2024-04-17 13:11:37.637357] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:33.523 [2024-04-17 13:11:37.637456] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:33.523 [2024-04-17 13:11:37.637472] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:33.523 [2024-04-17 13:11:37.637495] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:33.523 [2024-04-17 13:11:37.637503] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:33.523 [2024-04-17 13:11:37.637543] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:33.523 [2024-04-17 13:11:37.637553] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:33.523 [2024-04-17 13:11:37.637576] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:33.523 13:11:37 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:33.523 13:11:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:33.523 13:11:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:33.523 13:11:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:33.523 13:11:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:33.523 13:11:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:33.523 13:11:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:33.523 13:11:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:33.523 13:11:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:33.523 13:11:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:33.523 13:11:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:33.523 13:11:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:33.781 13:11:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:33.781 "name": "Existed_Raid", 00:27:33.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:33.781 "strip_size_kb": 64, 00:27:33.781 "state": "configuring", 00:27:33.781 "raid_level": "raid5f", 00:27:33.781 "superblock": false, 00:27:33.781 "num_base_bdevs": 4, 00:27:33.781 "num_base_bdevs_discovered": 0, 00:27:33.781 "num_base_bdevs_operational": 4, 00:27:33.781 "base_bdevs_list": [ 00:27:33.781 { 00:27:33.781 
"name": "BaseBdev1", 00:27:33.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:33.781 "is_configured": false, 00:27:33.781 "data_offset": 0, 00:27:33.781 "data_size": 0 00:27:33.781 }, 00:27:33.781 { 00:27:33.781 "name": "BaseBdev2", 00:27:33.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:33.781 "is_configured": false, 00:27:33.781 "data_offset": 0, 00:27:33.781 "data_size": 0 00:27:33.781 }, 00:27:33.781 { 00:27:33.781 "name": "BaseBdev3", 00:27:33.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:33.781 "is_configured": false, 00:27:33.781 "data_offset": 0, 00:27:33.781 "data_size": 0 00:27:33.781 }, 00:27:33.781 { 00:27:33.781 "name": "BaseBdev4", 00:27:33.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:33.781 "is_configured": false, 00:27:33.781 "data_offset": 0, 00:27:33.781 "data_size": 0 00:27:33.781 } 00:27:33.781 ] 00:27:33.781 }' 00:27:33.781 13:11:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:33.781 13:11:37 -- common/autotest_common.sh@10 -- # set +x 00:27:34.768 13:11:38 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:34.768 [2024-04-17 13:11:38.821479] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:34.768 [2024-04-17 13:11:38.821522] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:27:34.768 13:11:38 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:27:35.027 [2024-04-17 13:11:39.081570] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:35.027 [2024-04-17 13:11:39.081660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:35.027 [2024-04-17 13:11:39.081675] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:35.027 [2024-04-17 13:11:39.081702] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:35.027 [2024-04-17 13:11:39.081711] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:35.027 [2024-04-17 13:11:39.081748] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:35.027 [2024-04-17 13:11:39.081756] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:35.027 [2024-04-17 13:11:39.081780] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:35.027 13:11:39 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:27:35.285 [2024-04-17 13:11:39.369068] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:35.285 BaseBdev1 00:27:35.285 13:11:39 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:27:35.285 13:11:39 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:27:35.285 13:11:39 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:27:35.285 13:11:39 -- common/autotest_common.sh@887 -- # local i 00:27:35.285 13:11:39 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:27:35.285 13:11:39 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:27:35.285 13:11:39 -- common/autotest_common.sh@890 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:35.543 13:11:39 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:35.802 [ 00:27:35.802 { 00:27:35.802 "name": "BaseBdev1", 00:27:35.802 "aliases": [ 00:27:35.802 "5d918f65-8766-4e17-927d-169da3c90060" 00:27:35.802 ], 00:27:35.802 "product_name": "Malloc disk", 00:27:35.802 "block_size": 512, 00:27:35.802 "num_blocks": 65536, 00:27:35.802 "uuid": "5d918f65-8766-4e17-927d-169da3c90060", 00:27:35.802 "assigned_rate_limits": { 00:27:35.802 "rw_ios_per_sec": 0, 00:27:35.802 "rw_mbytes_per_sec": 0, 00:27:35.802 "r_mbytes_per_sec": 0, 00:27:35.802 "w_mbytes_per_sec": 0 00:27:35.802 }, 00:27:35.802 "claimed": true, 00:27:35.802 "claim_type": "exclusive_write", 00:27:35.802 "zoned": false, 00:27:35.802 "supported_io_types": { 00:27:35.802 "read": true, 00:27:35.802 "write": true, 00:27:35.802 "unmap": true, 00:27:35.802 "write_zeroes": true, 00:27:35.802 "flush": true, 00:27:35.802 "reset": true, 00:27:35.802 "compare": false, 00:27:35.802 "compare_and_write": false, 00:27:35.802 "abort": true, 00:27:35.802 "nvme_admin": false, 00:27:35.802 "nvme_io": false 00:27:35.802 }, 00:27:35.802 "memory_domains": [ 00:27:35.802 { 00:27:35.802 "dma_device_id": "system", 00:27:35.802 "dma_device_type": 1 00:27:35.802 }, 00:27:35.802 { 00:27:35.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:35.802 "dma_device_type": 2 00:27:35.802 } 00:27:35.802 ], 00:27:35.802 "driver_specific": {} 00:27:35.802 } 00:27:35.802 ] 00:27:35.802 13:11:39 -- common/autotest_common.sh@893 -- # return 0 00:27:35.803 13:11:39 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:35.803 13:11:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:35.803 13:11:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:35.803 13:11:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:35.803 13:11:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:35.803 13:11:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:35.803 13:11:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:35.803 13:11:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:35.803 13:11:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:35.803 13:11:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:35.803 13:11:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:35.803 13:11:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:36.061 13:11:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:36.061 "name": "Existed_Raid", 00:27:36.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:36.061 "strip_size_kb": 64, 00:27:36.061 "state": "configuring", 00:27:36.061 "raid_level": "raid5f", 00:27:36.061 "superblock": false, 00:27:36.061 "num_base_bdevs": 4, 00:27:36.061 "num_base_bdevs_discovered": 1, 00:27:36.061 "num_base_bdevs_operational": 4, 00:27:36.061 "base_bdevs_list": [ 00:27:36.061 { 00:27:36.062 "name": "BaseBdev1", 00:27:36.062 "uuid": "5d918f65-8766-4e17-927d-169da3c90060", 00:27:36.062 "is_configured": true, 00:27:36.062 "data_offset": 0, 00:27:36.062 "data_size": 65536 00:27:36.062 }, 00:27:36.062 { 00:27:36.062 "name": "BaseBdev2", 00:27:36.062 "uuid": "00000000-0000-0000-0000-000000000000", 
00:27:36.062 "is_configured": false, 00:27:36.062 "data_offset": 0, 00:27:36.062 "data_size": 0 00:27:36.062 }, 00:27:36.062 { 00:27:36.062 "name": "BaseBdev3", 00:27:36.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:36.062 "is_configured": false, 00:27:36.062 "data_offset": 0, 00:27:36.062 "data_size": 0 00:27:36.062 }, 00:27:36.062 { 00:27:36.062 "name": "BaseBdev4", 00:27:36.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:36.062 "is_configured": false, 00:27:36.062 "data_offset": 0, 00:27:36.062 "data_size": 0 00:27:36.062 } 00:27:36.062 ] 00:27:36.062 }' 00:27:36.062 13:11:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:36.062 13:11:40 -- common/autotest_common.sh@10 -- # set +x 00:27:36.995 13:11:40 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:36.995 [2024-04-17 13:11:41.053497] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:36.995 [2024-04-17 13:11:41.053573] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:27:36.995 13:11:41 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:27:36.995 13:11:41 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:27:37.253 [2024-04-17 13:11:41.285649] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:37.253 [2024-04-17 13:11:41.287907] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:37.253 [2024-04-17 13:11:41.288003] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:37.253 [2024-04-17 13:11:41.288017] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:37.253 [2024-04-17 13:11:41.288045] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:37.253 [2024-04-17 13:11:41.288054] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:37.253 [2024-04-17 13:11:41.288071] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:37.253 13:11:41 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:27:37.253 13:11:41 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:27:37.253 13:11:41 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:37.253 13:11:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:37.253 13:11:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:37.253 13:11:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:37.253 13:11:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:37.253 13:11:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:37.253 13:11:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:37.253 13:11:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:37.253 13:11:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:37.253 13:11:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:37.253 13:11:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:37.253 13:11:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:27:37.512 13:11:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:37.512 "name": "Existed_Raid", 00:27:37.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:37.512 "strip_size_kb": 64, 00:27:37.512 "state": "configuring", 00:27:37.512 "raid_level": "raid5f", 00:27:37.512 "superblock": false, 00:27:37.512 "num_base_bdevs": 4, 00:27:37.512 "num_base_bdevs_discovered": 1, 00:27:37.512 "num_base_bdevs_operational": 4, 00:27:37.512 "base_bdevs_list": [ 00:27:37.512 { 00:27:37.512 "name": "BaseBdev1", 00:27:37.512 "uuid": "5d918f65-8766-4e17-927d-169da3c90060", 00:27:37.512 "is_configured": true, 00:27:37.512 "data_offset": 0, 00:27:37.512 "data_size": 65536 00:27:37.512 }, 00:27:37.512 { 00:27:37.512 "name": "BaseBdev2", 00:27:37.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:37.512 "is_configured": false, 00:27:37.512 "data_offset": 0, 00:27:37.512 "data_size": 0 00:27:37.512 }, 00:27:37.512 { 00:27:37.512 "name": "BaseBdev3", 00:27:37.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:37.512 "is_configured": false, 00:27:37.512 "data_offset": 0, 00:27:37.512 "data_size": 0 00:27:37.512 }, 00:27:37.512 { 00:27:37.512 "name": "BaseBdev4", 00:27:37.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:37.512 "is_configured": false, 00:27:37.512 "data_offset": 0, 00:27:37.512 "data_size": 0 00:27:37.512 } 00:27:37.512 ] 00:27:37.512 }' 00:27:37.512 13:11:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:37.512 13:11:41 -- common/autotest_common.sh@10 -- # set +x 00:27:38.449 13:11:42 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:27:38.708 [2024-04-17 13:11:42.628373] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:38.708 BaseBdev2 00:27:38.708 13:11:42 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:27:38.708 13:11:42 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:27:38.708 13:11:42 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:27:38.708 13:11:42 -- common/autotest_common.sh@887 -- # local i 00:27:38.708 13:11:42 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:27:38.708 13:11:42 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:27:38.708 13:11:42 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:38.966 13:11:42 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:39.225 [ 00:27:39.225 { 00:27:39.225 "name": "BaseBdev2", 00:27:39.225 "aliases": [ 00:27:39.225 "702687c2-3aeb-4619-b14c-64682203bdb9" 00:27:39.225 ], 00:27:39.225 "product_name": "Malloc disk", 00:27:39.225 "block_size": 512, 00:27:39.225 "num_blocks": 65536, 00:27:39.225 "uuid": "702687c2-3aeb-4619-b14c-64682203bdb9", 00:27:39.225 "assigned_rate_limits": { 00:27:39.225 "rw_ios_per_sec": 0, 00:27:39.225 "rw_mbytes_per_sec": 0, 00:27:39.225 "r_mbytes_per_sec": 0, 00:27:39.225 "w_mbytes_per_sec": 0 00:27:39.225 }, 00:27:39.225 "claimed": true, 00:27:39.225 "claim_type": "exclusive_write", 00:27:39.225 "zoned": false, 00:27:39.225 "supported_io_types": { 00:27:39.225 "read": true, 00:27:39.225 "write": true, 00:27:39.225 "unmap": true, 00:27:39.225 "write_zeroes": true, 00:27:39.225 "flush": true, 00:27:39.225 "reset": true, 00:27:39.225 "compare": false, 00:27:39.225 "compare_and_write": false, 00:27:39.225 "abort": true, 
00:27:39.225 "nvme_admin": false, 00:27:39.225 "nvme_io": false 00:27:39.225 }, 00:27:39.225 "memory_domains": [ 00:27:39.225 { 00:27:39.225 "dma_device_id": "system", 00:27:39.225 "dma_device_type": 1 00:27:39.225 }, 00:27:39.225 { 00:27:39.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:39.225 "dma_device_type": 2 00:27:39.225 } 00:27:39.225 ], 00:27:39.225 "driver_specific": {} 00:27:39.225 } 00:27:39.225 ] 00:27:39.225 13:11:43 -- common/autotest_common.sh@893 -- # return 0 00:27:39.225 13:11:43 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:27:39.225 13:11:43 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:27:39.225 13:11:43 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:39.225 13:11:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:39.225 13:11:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:39.225 13:11:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:39.225 13:11:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:39.225 13:11:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:39.225 13:11:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:39.225 13:11:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:39.225 13:11:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:39.225 13:11:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:39.225 13:11:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:39.225 13:11:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:39.483 13:11:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:39.483 "name": "Existed_Raid", 00:27:39.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:39.483 "strip_size_kb": 64, 00:27:39.483 "state": "configuring", 00:27:39.483 "raid_level": "raid5f", 00:27:39.483 "superblock": false, 00:27:39.483 "num_base_bdevs": 4, 00:27:39.483 "num_base_bdevs_discovered": 2, 00:27:39.483 "num_base_bdevs_operational": 4, 00:27:39.483 "base_bdevs_list": [ 00:27:39.483 { 00:27:39.483 "name": "BaseBdev1", 00:27:39.483 "uuid": "5d918f65-8766-4e17-927d-169da3c90060", 00:27:39.483 "is_configured": true, 00:27:39.483 "data_offset": 0, 00:27:39.483 "data_size": 65536 00:27:39.483 }, 00:27:39.483 { 00:27:39.483 "name": "BaseBdev2", 00:27:39.483 "uuid": "702687c2-3aeb-4619-b14c-64682203bdb9", 00:27:39.483 "is_configured": true, 00:27:39.483 "data_offset": 0, 00:27:39.483 "data_size": 65536 00:27:39.483 }, 00:27:39.483 { 00:27:39.483 "name": "BaseBdev3", 00:27:39.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:39.483 "is_configured": false, 00:27:39.483 "data_offset": 0, 00:27:39.483 "data_size": 0 00:27:39.483 }, 00:27:39.483 { 00:27:39.483 "name": "BaseBdev4", 00:27:39.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:39.483 "is_configured": false, 00:27:39.483 "data_offset": 0, 00:27:39.483 "data_size": 0 00:27:39.483 } 00:27:39.483 ] 00:27:39.483 }' 00:27:39.483 13:11:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:39.483 13:11:43 -- common/autotest_common.sh@10 -- # set +x 00:27:40.417 13:11:44 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:27:40.675 [2024-04-17 13:11:44.575280] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:40.675 BaseBdev3 00:27:40.675 13:11:44 -- 
bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:27:40.675 13:11:44 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:27:40.675 13:11:44 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:27:40.675 13:11:44 -- common/autotest_common.sh@887 -- # local i 00:27:40.675 13:11:44 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:27:40.675 13:11:44 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:27:40.675 13:11:44 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:40.933 13:11:44 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:41.192 [ 00:27:41.192 { 00:27:41.192 "name": "BaseBdev3", 00:27:41.192 "aliases": [ 00:27:41.192 "b19e37d9-6b40-48fd-8ca6-2aadf9f91380" 00:27:41.192 ], 00:27:41.192 "product_name": "Malloc disk", 00:27:41.192 "block_size": 512, 00:27:41.192 "num_blocks": 65536, 00:27:41.192 "uuid": "b19e37d9-6b40-48fd-8ca6-2aadf9f91380", 00:27:41.192 "assigned_rate_limits": { 00:27:41.192 "rw_ios_per_sec": 0, 00:27:41.192 "rw_mbytes_per_sec": 0, 00:27:41.192 "r_mbytes_per_sec": 0, 00:27:41.192 "w_mbytes_per_sec": 0 00:27:41.192 }, 00:27:41.192 "claimed": true, 00:27:41.192 "claim_type": "exclusive_write", 00:27:41.192 "zoned": false, 00:27:41.192 "supported_io_types": { 00:27:41.192 "read": true, 00:27:41.192 "write": true, 00:27:41.192 "unmap": true, 00:27:41.192 "write_zeroes": true, 00:27:41.192 "flush": true, 00:27:41.192 "reset": true, 00:27:41.192 "compare": false, 00:27:41.192 "compare_and_write": false, 00:27:41.192 "abort": true, 00:27:41.192 "nvme_admin": false, 00:27:41.192 "nvme_io": false 00:27:41.192 }, 00:27:41.192 "memory_domains": [ 00:27:41.192 { 00:27:41.192 "dma_device_id": "system", 00:27:41.192 "dma_device_type": 1 00:27:41.192 }, 00:27:41.192 { 00:27:41.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:41.192 "dma_device_type": 2 00:27:41.192 } 00:27:41.192 ], 00:27:41.192 "driver_specific": {} 00:27:41.192 } 00:27:41.192 ] 00:27:41.192 13:11:45 -- common/autotest_common.sh@893 -- # return 0 00:27:41.192 13:11:45 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:27:41.192 13:11:45 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:27:41.192 13:11:45 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:41.192 13:11:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:41.192 13:11:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:41.192 13:11:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:41.192 13:11:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:41.192 13:11:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:41.192 13:11:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:41.192 13:11:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:41.192 13:11:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:41.192 13:11:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:41.192 13:11:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:41.192 13:11:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:41.450 13:11:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:41.450 "name": "Existed_Raid", 00:27:41.450 "uuid": "00000000-0000-0000-0000-000000000000", 
00:27:41.450 "strip_size_kb": 64, 00:27:41.450 "state": "configuring", 00:27:41.450 "raid_level": "raid5f", 00:27:41.450 "superblock": false, 00:27:41.450 "num_base_bdevs": 4, 00:27:41.450 "num_base_bdevs_discovered": 3, 00:27:41.450 "num_base_bdevs_operational": 4, 00:27:41.450 "base_bdevs_list": [ 00:27:41.450 { 00:27:41.450 "name": "BaseBdev1", 00:27:41.450 "uuid": "5d918f65-8766-4e17-927d-169da3c90060", 00:27:41.450 "is_configured": true, 00:27:41.450 "data_offset": 0, 00:27:41.450 "data_size": 65536 00:27:41.450 }, 00:27:41.450 { 00:27:41.450 "name": "BaseBdev2", 00:27:41.450 "uuid": "702687c2-3aeb-4619-b14c-64682203bdb9", 00:27:41.450 "is_configured": true, 00:27:41.450 "data_offset": 0, 00:27:41.450 "data_size": 65536 00:27:41.450 }, 00:27:41.450 { 00:27:41.450 "name": "BaseBdev3", 00:27:41.450 "uuid": "b19e37d9-6b40-48fd-8ca6-2aadf9f91380", 00:27:41.450 "is_configured": true, 00:27:41.450 "data_offset": 0, 00:27:41.450 "data_size": 65536 00:27:41.450 }, 00:27:41.450 { 00:27:41.450 "name": "BaseBdev4", 00:27:41.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:41.450 "is_configured": false, 00:27:41.450 "data_offset": 0, 00:27:41.450 "data_size": 0 00:27:41.450 } 00:27:41.450 ] 00:27:41.450 }' 00:27:41.450 13:11:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:41.450 13:11:45 -- common/autotest_common.sh@10 -- # set +x 00:27:42.016 13:11:46 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:27:42.583 [2024-04-17 13:11:46.461904] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:42.583 [2024-04-17 13:11:46.461994] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:27:42.583 [2024-04-17 13:11:46.462007] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:27:42.583 [2024-04-17 13:11:46.462154] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:27:42.583 [2024-04-17 13:11:46.469080] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:27:42.583 [2024-04-17 13:11:46.469112] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:27:42.583 [2024-04-17 13:11:46.469413] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:42.583 BaseBdev4 00:27:42.583 13:11:46 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:27:42.583 13:11:46 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:27:42.583 13:11:46 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:27:42.583 13:11:46 -- common/autotest_common.sh@887 -- # local i 00:27:42.583 13:11:46 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:27:42.583 13:11:46 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:27:42.583 13:11:46 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:42.841 13:11:46 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:27:43.100 [ 00:27:43.100 { 00:27:43.100 "name": "BaseBdev4", 00:27:43.100 "aliases": [ 00:27:43.100 "4cae1445-3cd5-47d8-94fb-65ebd7b38b5a" 00:27:43.100 ], 00:27:43.100 "product_name": "Malloc disk", 00:27:43.100 "block_size": 512, 00:27:43.100 "num_blocks": 65536, 00:27:43.100 "uuid": 
"4cae1445-3cd5-47d8-94fb-65ebd7b38b5a", 00:27:43.100 "assigned_rate_limits": { 00:27:43.100 "rw_ios_per_sec": 0, 00:27:43.100 "rw_mbytes_per_sec": 0, 00:27:43.100 "r_mbytes_per_sec": 0, 00:27:43.100 "w_mbytes_per_sec": 0 00:27:43.100 }, 00:27:43.100 "claimed": true, 00:27:43.100 "claim_type": "exclusive_write", 00:27:43.100 "zoned": false, 00:27:43.100 "supported_io_types": { 00:27:43.100 "read": true, 00:27:43.100 "write": true, 00:27:43.100 "unmap": true, 00:27:43.100 "write_zeroes": true, 00:27:43.100 "flush": true, 00:27:43.100 "reset": true, 00:27:43.100 "compare": false, 00:27:43.100 "compare_and_write": false, 00:27:43.100 "abort": true, 00:27:43.100 "nvme_admin": false, 00:27:43.100 "nvme_io": false 00:27:43.100 }, 00:27:43.100 "memory_domains": [ 00:27:43.100 { 00:27:43.100 "dma_device_id": "system", 00:27:43.100 "dma_device_type": 1 00:27:43.100 }, 00:27:43.100 { 00:27:43.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:43.100 "dma_device_type": 2 00:27:43.100 } 00:27:43.100 ], 00:27:43.100 "driver_specific": {} 00:27:43.100 } 00:27:43.100 ] 00:27:43.100 13:11:47 -- common/autotest_common.sh@893 -- # return 0 00:27:43.100 13:11:47 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:27:43.100 13:11:47 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:27:43.100 13:11:47 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:27:43.100 13:11:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:43.100 13:11:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:43.100 13:11:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:43.100 13:11:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:43.100 13:11:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:43.100 13:11:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:43.100 13:11:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:43.100 13:11:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:43.100 13:11:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:43.100 13:11:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:43.100 13:11:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:43.359 13:11:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:43.359 "name": "Existed_Raid", 00:27:43.359 "uuid": "01f9170d-f1f7-4972-af26-15d35b542a3f", 00:27:43.359 "strip_size_kb": 64, 00:27:43.359 "state": "online", 00:27:43.359 "raid_level": "raid5f", 00:27:43.359 "superblock": false, 00:27:43.359 "num_base_bdevs": 4, 00:27:43.359 "num_base_bdevs_discovered": 4, 00:27:43.359 "num_base_bdevs_operational": 4, 00:27:43.359 "base_bdevs_list": [ 00:27:43.359 { 00:27:43.359 "name": "BaseBdev1", 00:27:43.359 "uuid": "5d918f65-8766-4e17-927d-169da3c90060", 00:27:43.359 "is_configured": true, 00:27:43.359 "data_offset": 0, 00:27:43.359 "data_size": 65536 00:27:43.359 }, 00:27:43.359 { 00:27:43.359 "name": "BaseBdev2", 00:27:43.359 "uuid": "702687c2-3aeb-4619-b14c-64682203bdb9", 00:27:43.359 "is_configured": true, 00:27:43.359 "data_offset": 0, 00:27:43.359 "data_size": 65536 00:27:43.359 }, 00:27:43.359 { 00:27:43.359 "name": "BaseBdev3", 00:27:43.359 "uuid": "b19e37d9-6b40-48fd-8ca6-2aadf9f91380", 00:27:43.359 "is_configured": true, 00:27:43.359 "data_offset": 0, 00:27:43.359 "data_size": 65536 00:27:43.359 }, 00:27:43.359 { 00:27:43.359 "name": "BaseBdev4", 00:27:43.359 "uuid": 
"4cae1445-3cd5-47d8-94fb-65ebd7b38b5a", 00:27:43.359 "is_configured": true, 00:27:43.359 "data_offset": 0, 00:27:43.359 "data_size": 65536 00:27:43.359 } 00:27:43.359 ] 00:27:43.359 }' 00:27:43.359 13:11:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:43.359 13:11:47 -- common/autotest_common.sh@10 -- # set +x 00:27:43.926 13:11:48 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:27:44.184 [2024-04-17 13:11:48.329333] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:44.443 13:11:48 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:27:44.443 13:11:48 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:27:44.443 13:11:48 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:27:44.443 13:11:48 -- bdev/bdev_raid.sh@196 -- # return 0 00:27:44.443 13:11:48 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:27:44.443 13:11:48 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:27:44.443 13:11:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:44.443 13:11:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:44.443 13:11:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:44.443 13:11:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:44.443 13:11:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:44.443 13:11:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:44.443 13:11:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:44.443 13:11:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:44.443 13:11:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:44.443 13:11:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:44.443 13:11:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:44.702 13:11:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:44.702 "name": "Existed_Raid", 00:27:44.702 "uuid": "01f9170d-f1f7-4972-af26-15d35b542a3f", 00:27:44.702 "strip_size_kb": 64, 00:27:44.702 "state": "online", 00:27:44.702 "raid_level": "raid5f", 00:27:44.702 "superblock": false, 00:27:44.702 "num_base_bdevs": 4, 00:27:44.702 "num_base_bdevs_discovered": 3, 00:27:44.702 "num_base_bdevs_operational": 3, 00:27:44.702 "base_bdevs_list": [ 00:27:44.702 { 00:27:44.702 "name": null, 00:27:44.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:44.702 "is_configured": false, 00:27:44.702 "data_offset": 0, 00:27:44.702 "data_size": 65536 00:27:44.702 }, 00:27:44.702 { 00:27:44.702 "name": "BaseBdev2", 00:27:44.702 "uuid": "702687c2-3aeb-4619-b14c-64682203bdb9", 00:27:44.702 "is_configured": true, 00:27:44.702 "data_offset": 0, 00:27:44.702 "data_size": 65536 00:27:44.702 }, 00:27:44.702 { 00:27:44.702 "name": "BaseBdev3", 00:27:44.702 "uuid": "b19e37d9-6b40-48fd-8ca6-2aadf9f91380", 00:27:44.702 "is_configured": true, 00:27:44.702 "data_offset": 0, 00:27:44.702 "data_size": 65536 00:27:44.702 }, 00:27:44.702 { 00:27:44.702 "name": "BaseBdev4", 00:27:44.702 "uuid": "4cae1445-3cd5-47d8-94fb-65ebd7b38b5a", 00:27:44.702 "is_configured": true, 00:27:44.702 "data_offset": 0, 00:27:44.702 "data_size": 65536 00:27:44.702 } 00:27:44.702 ] 00:27:44.702 }' 00:27:44.702 13:11:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:44.702 13:11:48 -- common/autotest_common.sh@10 -- # set +x 00:27:45.268 13:11:49 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 
00:27:45.268 13:11:49 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:27:45.268 13:11:49 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:45.268 13:11:49 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:27:45.526 13:11:49 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:27:45.526 13:11:49 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:45.526 13:11:49 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:27:45.784 [2024-04-17 13:11:49.858957] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:45.784 [2024-04-17 13:11:49.859078] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:46.042 [2024-04-17 13:11:49.942304] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:46.042 13:11:49 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:27:46.042 13:11:49 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:27:46.042 13:11:49 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:46.042 13:11:49 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:27:46.300 13:11:50 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:27:46.300 13:11:50 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:46.300 13:11:50 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:27:46.557 [2024-04-17 13:11:50.470927] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:46.557 13:11:50 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:27:46.557 13:11:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:27:46.557 13:11:50 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:46.557 13:11:50 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:27:46.816 13:11:50 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:27:46.816 13:11:50 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:46.816 13:11:50 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:27:47.075 [2024-04-17 13:11:51.107032] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:27:47.075 [2024-04-17 13:11:51.107130] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:27:47.075 13:11:51 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:27:47.075 13:11:51 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:27:47.075 13:11:51 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:47.075 13:11:51 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:27:47.334 13:11:51 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:27:47.334 13:11:51 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:27:47.334 13:11:51 -- bdev/bdev_raid.sh@287 -- # killprocess 138469 00:27:47.334 13:11:51 -- common/autotest_common.sh@924 -- # '[' -z 138469 ']' 00:27:47.334 13:11:51 -- common/autotest_common.sh@928 -- # kill -0 138469 00:27:47.334 13:11:51 -- common/autotest_common.sh@929 -- # uname 00:27:47.334 13:11:51 -- common/autotest_common.sh@929 -- # '[' Linux = 
Linux ']' 00:27:47.334 13:11:51 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 138469 00:27:47.592 killing process with pid 138469 00:27:47.592 13:11:51 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:27:47.592 13:11:51 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:27:47.592 13:11:51 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 138469' 00:27:47.592 13:11:51 -- common/autotest_common.sh@943 -- # kill 138469 00:27:47.592 13:11:51 -- common/autotest_common.sh@948 -- # wait 138469 00:27:47.592 [2024-04-17 13:11:51.482845] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:47.593 [2024-04-17 13:11:51.482962] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:48.552 ************************************ 00:27:48.552 END TEST raid5f_state_function_test 00:27:48.552 ************************************ 00:27:48.552 13:11:52 -- bdev/bdev_raid.sh@289 -- # return 0 00:27:48.552 00:27:48.552 real 0m16.259s 00:27:48.552 user 0m29.245s 00:27:48.552 sys 0m1.795s 00:27:48.552 13:11:52 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:27:48.552 13:11:52 -- common/autotest_common.sh@10 -- # set +x 00:27:48.552 13:11:52 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:27:48.552 13:11:52 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:27:48.552 13:11:52 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:27:48.552 13:11:52 -- common/autotest_common.sh@10 -- # set +x 00:27:48.811 ************************************ 00:27:48.811 START TEST raid5f_state_function_test_sb 00:27:48.811 ************************************ 00:27:48.811 13:11:52 -- common/autotest_common.sh@1099 -- # raid_state_function_test raid5f 4 true 00:27:48.811 13:11:52 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:27:48.811 13:11:52 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:27:48.811 13:11:52 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:27:48.811 13:11:52 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:27:48.811 13:11:52 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:27:48.811 13:11:52 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:27:48.811 13:11:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:27:48.811 13:11:52 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:27:48.811 13:11:52 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:27:48.811 13:11:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:27:48.811 13:11:52 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:27:48.811 13:11:52 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:27:48.811 13:11:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:27:48.811 13:11:52 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:27:48.811 13:11:52 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:27:48.811 13:11:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:27:48.811 13:11:52 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:27:48.811 13:11:52 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:27:48.811 13:11:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:27:48.811 13:11:52 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:27:48.811 13:11:52 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:27:48.811 13:11:52 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:27:48.811 13:11:52 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:27:48.811 13:11:52 -- 
bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:27:48.811 13:11:52 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:27:48.811 13:11:52 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:27:48.811 13:11:52 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:27:48.811 13:11:52 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:27:48.811 13:11:52 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:27:48.811 13:11:52 -- bdev/bdev_raid.sh@226 -- # raid_pid=138951 00:27:48.811 Process raid pid: 138951 00:27:48.811 13:11:52 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 138951' 00:27:48.811 13:11:52 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:27:48.811 13:11:52 -- bdev/bdev_raid.sh@228 -- # waitforlisten 138951 /var/tmp/spdk-raid.sock 00:27:48.811 13:11:52 -- common/autotest_common.sh@817 -- # '[' -z 138951 ']' 00:27:48.811 13:11:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:48.811 13:11:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:48.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:48.811 13:11:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:48.811 13:11:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:48.811 13:11:52 -- common/autotest_common.sh@10 -- # set +x 00:27:48.811 [2024-04-17 13:11:52.773557] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:27:48.811 [2024-04-17 13:11:52.773960] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:48.811 [2024-04-17 13:11:52.941654] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:49.070 [2024-04-17 13:11:53.156227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:49.328 [2024-04-17 13:11:53.360866] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:49.587 13:11:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:49.587 13:11:53 -- common/autotest_common.sh@850 -- # return 0 00:27:49.587 13:11:53 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:27:49.846 [2024-04-17 13:11:53.948928] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:49.846 [2024-04-17 13:11:53.949270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:49.846 [2024-04-17 13:11:53.949417] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:49.846 [2024-04-17 13:11:53.949559] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:49.846 [2024-04-17 13:11:53.949652] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:49.846 [2024-04-17 13:11:53.949728] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:49.846 [2024-04-17 13:11:53.949861] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:49.846 [2024-04-17 
13:11:53.949925] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:49.846 13:11:53 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:49.846 13:11:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:49.846 13:11:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:49.846 13:11:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:49.846 13:11:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:49.846 13:11:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:49.846 13:11:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:49.846 13:11:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:49.846 13:11:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:49.846 13:11:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:49.846 13:11:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:49.846 13:11:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:50.105 13:11:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:50.105 "name": "Existed_Raid", 00:27:50.105 "uuid": "5183b1d7-1374-4f44-b24d-130975eff7ef", 00:27:50.105 "strip_size_kb": 64, 00:27:50.105 "state": "configuring", 00:27:50.105 "raid_level": "raid5f", 00:27:50.105 "superblock": true, 00:27:50.105 "num_base_bdevs": 4, 00:27:50.105 "num_base_bdevs_discovered": 0, 00:27:50.105 "num_base_bdevs_operational": 4, 00:27:50.105 "base_bdevs_list": [ 00:27:50.105 { 00:27:50.105 "name": "BaseBdev1", 00:27:50.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:50.105 "is_configured": false, 00:27:50.105 "data_offset": 0, 00:27:50.105 "data_size": 0 00:27:50.105 }, 00:27:50.105 { 00:27:50.105 "name": "BaseBdev2", 00:27:50.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:50.105 "is_configured": false, 00:27:50.105 "data_offset": 0, 00:27:50.105 "data_size": 0 00:27:50.105 }, 00:27:50.105 { 00:27:50.105 "name": "BaseBdev3", 00:27:50.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:50.105 "is_configured": false, 00:27:50.105 "data_offset": 0, 00:27:50.105 "data_size": 0 00:27:50.105 }, 00:27:50.105 { 00:27:50.105 "name": "BaseBdev4", 00:27:50.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:50.105 "is_configured": false, 00:27:50.105 "data_offset": 0, 00:27:50.105 "data_size": 0 00:27:50.105 } 00:27:50.105 ] 00:27:50.105 }' 00:27:50.105 13:11:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:50.105 13:11:54 -- common/autotest_common.sh@10 -- # set +x 00:27:51.040 13:11:54 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:51.040 [2024-04-17 13:11:55.156994] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:51.040 [2024-04-17 13:11:55.157263] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:27:51.040 13:11:55 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:27:51.299 [2024-04-17 13:11:55.389105] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:51.299 [2024-04-17 13:11:55.389433] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: 
base bdev BaseBdev1 doesn't exist now 00:27:51.299 [2024-04-17 13:11:55.389547] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:51.299 [2024-04-17 13:11:55.389611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:51.299 [2024-04-17 13:11:55.389831] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:51.299 [2024-04-17 13:11:55.389907] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:51.299 [2024-04-17 13:11:55.390085] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:51.299 [2024-04-17 13:11:55.390156] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:51.299 13:11:55 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:27:51.561 [2024-04-17 13:11:55.655582] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:51.561 BaseBdev1 00:27:51.561 13:11:55 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:27:51.561 13:11:55 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:27:51.561 13:11:55 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:27:51.561 13:11:55 -- common/autotest_common.sh@887 -- # local i 00:27:51.561 13:11:55 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:27:51.561 13:11:55 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:27:51.561 13:11:55 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:51.823 13:11:55 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:52.082 [ 00:27:52.082 { 00:27:52.082 "name": "BaseBdev1", 00:27:52.082 "aliases": [ 00:27:52.082 "9dcc4e7f-ba36-479e-8abe-7b4091107c74" 00:27:52.082 ], 00:27:52.082 "product_name": "Malloc disk", 00:27:52.082 "block_size": 512, 00:27:52.082 "num_blocks": 65536, 00:27:52.082 "uuid": "9dcc4e7f-ba36-479e-8abe-7b4091107c74", 00:27:52.082 "assigned_rate_limits": { 00:27:52.082 "rw_ios_per_sec": 0, 00:27:52.082 "rw_mbytes_per_sec": 0, 00:27:52.082 "r_mbytes_per_sec": 0, 00:27:52.082 "w_mbytes_per_sec": 0 00:27:52.082 }, 00:27:52.082 "claimed": true, 00:27:52.082 "claim_type": "exclusive_write", 00:27:52.082 "zoned": false, 00:27:52.082 "supported_io_types": { 00:27:52.082 "read": true, 00:27:52.082 "write": true, 00:27:52.082 "unmap": true, 00:27:52.082 "write_zeroes": true, 00:27:52.082 "flush": true, 00:27:52.082 "reset": true, 00:27:52.082 "compare": false, 00:27:52.082 "compare_and_write": false, 00:27:52.082 "abort": true, 00:27:52.082 "nvme_admin": false, 00:27:52.082 "nvme_io": false 00:27:52.082 }, 00:27:52.082 "memory_domains": [ 00:27:52.082 { 00:27:52.082 "dma_device_id": "system", 00:27:52.082 "dma_device_type": 1 00:27:52.082 }, 00:27:52.082 { 00:27:52.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:52.082 "dma_device_type": 2 00:27:52.082 } 00:27:52.082 ], 00:27:52.082 "driver_specific": {} 00:27:52.082 } 00:27:52.082 ] 00:27:52.082 13:11:56 -- common/autotest_common.sh@893 -- # return 0 00:27:52.082 13:11:56 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:52.082 13:11:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:52.082 
13:11:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:52.082 13:11:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:52.082 13:11:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:52.082 13:11:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:52.082 13:11:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:52.082 13:11:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:52.082 13:11:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:52.082 13:11:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:52.082 13:11:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:52.082 13:11:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:52.341 13:11:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:52.341 "name": "Existed_Raid", 00:27:52.341 "uuid": "01b4ee18-b8b6-4252-8700-565b89367987", 00:27:52.341 "strip_size_kb": 64, 00:27:52.341 "state": "configuring", 00:27:52.341 "raid_level": "raid5f", 00:27:52.341 "superblock": true, 00:27:52.341 "num_base_bdevs": 4, 00:27:52.341 "num_base_bdevs_discovered": 1, 00:27:52.341 "num_base_bdevs_operational": 4, 00:27:52.341 "base_bdevs_list": [ 00:27:52.341 { 00:27:52.341 "name": "BaseBdev1", 00:27:52.341 "uuid": "9dcc4e7f-ba36-479e-8abe-7b4091107c74", 00:27:52.341 "is_configured": true, 00:27:52.341 "data_offset": 2048, 00:27:52.341 "data_size": 63488 00:27:52.341 }, 00:27:52.341 { 00:27:52.341 "name": "BaseBdev2", 00:27:52.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:52.341 "is_configured": false, 00:27:52.341 "data_offset": 0, 00:27:52.341 "data_size": 0 00:27:52.341 }, 00:27:52.341 { 00:27:52.341 "name": "BaseBdev3", 00:27:52.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:52.341 "is_configured": false, 00:27:52.341 "data_offset": 0, 00:27:52.341 "data_size": 0 00:27:52.341 }, 00:27:52.341 { 00:27:52.341 "name": "BaseBdev4", 00:27:52.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:52.341 "is_configured": false, 00:27:52.341 "data_offset": 0, 00:27:52.341 "data_size": 0 00:27:52.341 } 00:27:52.341 ] 00:27:52.341 }' 00:27:52.341 13:11:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:52.341 13:11:56 -- common/autotest_common.sh@10 -- # set +x 00:27:53.278 13:11:57 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:53.278 [2024-04-17 13:11:57.364103] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:53.278 [2024-04-17 13:11:57.364383] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:27:53.278 13:11:57 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:27:53.278 13:11:57 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:27:53.845 13:11:57 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:27:54.105 BaseBdev1 00:27:54.105 13:11:58 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:27:54.105 13:11:58 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:27:54.105 13:11:58 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:27:54.105 13:11:58 -- common/autotest_common.sh@887 -- # local i 00:27:54.105 13:11:58 -- 
common/autotest_common.sh@888 -- # [[ -z '' ]] 00:27:54.105 13:11:58 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:27:54.105 13:11:58 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:54.364 13:11:58 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:54.364 [ 00:27:54.364 { 00:27:54.364 "name": "BaseBdev1", 00:27:54.364 "aliases": [ 00:27:54.364 "f0979776-4788-4d09-9a93-667c5f0942f5" 00:27:54.364 ], 00:27:54.364 "product_name": "Malloc disk", 00:27:54.364 "block_size": 512, 00:27:54.364 "num_blocks": 65536, 00:27:54.364 "uuid": "f0979776-4788-4d09-9a93-667c5f0942f5", 00:27:54.364 "assigned_rate_limits": { 00:27:54.364 "rw_ios_per_sec": 0, 00:27:54.364 "rw_mbytes_per_sec": 0, 00:27:54.364 "r_mbytes_per_sec": 0, 00:27:54.364 "w_mbytes_per_sec": 0 00:27:54.364 }, 00:27:54.364 "claimed": false, 00:27:54.364 "zoned": false, 00:27:54.364 "supported_io_types": { 00:27:54.364 "read": true, 00:27:54.364 "write": true, 00:27:54.364 "unmap": true, 00:27:54.364 "write_zeroes": true, 00:27:54.364 "flush": true, 00:27:54.364 "reset": true, 00:27:54.364 "compare": false, 00:27:54.364 "compare_and_write": false, 00:27:54.364 "abort": true, 00:27:54.364 "nvme_admin": false, 00:27:54.364 "nvme_io": false 00:27:54.364 }, 00:27:54.364 "memory_domains": [ 00:27:54.364 { 00:27:54.364 "dma_device_id": "system", 00:27:54.364 "dma_device_type": 1 00:27:54.364 }, 00:27:54.364 { 00:27:54.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:54.364 "dma_device_type": 2 00:27:54.364 } 00:27:54.364 ], 00:27:54.364 "driver_specific": {} 00:27:54.364 } 00:27:54.364 ] 00:27:54.364 13:11:58 -- common/autotest_common.sh@893 -- # return 0 00:27:54.364 13:11:58 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:27:54.623 [2024-04-17 13:11:58.728986] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:54.623 [2024-04-17 13:11:58.731343] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:54.623 [2024-04-17 13:11:58.731565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:54.623 [2024-04-17 13:11:58.731679] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:54.623 [2024-04-17 13:11:58.731741] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:54.623 [2024-04-17 13:11:58.731993] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:54.623 [2024-04-17 13:11:58.732052] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:54.623 13:11:58 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:27:54.623 13:11:58 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:27:54.623 13:11:58 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:54.623 13:11:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:54.623 13:11:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:54.623 13:11:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:54.623 13:11:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:54.623 13:11:58 
-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:54.623 13:11:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:54.623 13:11:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:54.623 13:11:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:54.623 13:11:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:54.623 13:11:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:54.623 13:11:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:55.191 13:11:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:55.191 "name": "Existed_Raid", 00:27:55.191 "uuid": "f65d4ee3-3d41-4d0c-bfab-a0f6cabae946", 00:27:55.191 "strip_size_kb": 64, 00:27:55.192 "state": "configuring", 00:27:55.192 "raid_level": "raid5f", 00:27:55.192 "superblock": true, 00:27:55.192 "num_base_bdevs": 4, 00:27:55.192 "num_base_bdevs_discovered": 1, 00:27:55.192 "num_base_bdevs_operational": 4, 00:27:55.192 "base_bdevs_list": [ 00:27:55.192 { 00:27:55.192 "name": "BaseBdev1", 00:27:55.192 "uuid": "f0979776-4788-4d09-9a93-667c5f0942f5", 00:27:55.192 "is_configured": true, 00:27:55.192 "data_offset": 2048, 00:27:55.192 "data_size": 63488 00:27:55.192 }, 00:27:55.192 { 00:27:55.192 "name": "BaseBdev2", 00:27:55.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:55.192 "is_configured": false, 00:27:55.192 "data_offset": 0, 00:27:55.192 "data_size": 0 00:27:55.192 }, 00:27:55.192 { 00:27:55.192 "name": "BaseBdev3", 00:27:55.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:55.192 "is_configured": false, 00:27:55.192 "data_offset": 0, 00:27:55.192 "data_size": 0 00:27:55.192 }, 00:27:55.192 { 00:27:55.192 "name": "BaseBdev4", 00:27:55.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:55.192 "is_configured": false, 00:27:55.192 "data_offset": 0, 00:27:55.192 "data_size": 0 00:27:55.192 } 00:27:55.192 ] 00:27:55.192 }' 00:27:55.192 13:11:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:55.192 13:11:59 -- common/autotest_common.sh@10 -- # set +x 00:27:55.759 13:11:59 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:27:56.018 [2024-04-17 13:12:00.063745] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:56.018 BaseBdev2 00:27:56.018 13:12:00 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:27:56.018 13:12:00 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:27:56.018 13:12:00 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:27:56.018 13:12:00 -- common/autotest_common.sh@887 -- # local i 00:27:56.018 13:12:00 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:27:56.018 13:12:00 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:27:56.018 13:12:00 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:56.276 13:12:00 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:56.535 [ 00:27:56.535 { 00:27:56.535 "name": "BaseBdev2", 00:27:56.535 "aliases": [ 00:27:56.535 "8dd369a2-17be-42d8-8a94-4f41a3de7508" 00:27:56.535 ], 00:27:56.535 "product_name": "Malloc disk", 00:27:56.535 "block_size": 512, 00:27:56.535 "num_blocks": 65536, 00:27:56.535 "uuid": "8dd369a2-17be-42d8-8a94-4f41a3de7508", 
00:27:56.535 "assigned_rate_limits": { 00:27:56.535 "rw_ios_per_sec": 0, 00:27:56.535 "rw_mbytes_per_sec": 0, 00:27:56.535 "r_mbytes_per_sec": 0, 00:27:56.535 "w_mbytes_per_sec": 0 00:27:56.535 }, 00:27:56.535 "claimed": true, 00:27:56.535 "claim_type": "exclusive_write", 00:27:56.535 "zoned": false, 00:27:56.535 "supported_io_types": { 00:27:56.535 "read": true, 00:27:56.535 "write": true, 00:27:56.535 "unmap": true, 00:27:56.535 "write_zeroes": true, 00:27:56.535 "flush": true, 00:27:56.535 "reset": true, 00:27:56.535 "compare": false, 00:27:56.535 "compare_and_write": false, 00:27:56.535 "abort": true, 00:27:56.535 "nvme_admin": false, 00:27:56.535 "nvme_io": false 00:27:56.535 }, 00:27:56.535 "memory_domains": [ 00:27:56.535 { 00:27:56.535 "dma_device_id": "system", 00:27:56.535 "dma_device_type": 1 00:27:56.535 }, 00:27:56.535 { 00:27:56.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:56.535 "dma_device_type": 2 00:27:56.535 } 00:27:56.535 ], 00:27:56.535 "driver_specific": {} 00:27:56.535 } 00:27:56.535 ] 00:27:56.535 13:12:00 -- common/autotest_common.sh@893 -- # return 0 00:27:56.535 13:12:00 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:27:56.535 13:12:00 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:27:56.535 13:12:00 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:56.535 13:12:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:56.535 13:12:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:56.535 13:12:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:56.535 13:12:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:56.535 13:12:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:56.535 13:12:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:56.535 13:12:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:56.535 13:12:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:56.535 13:12:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:56.535 13:12:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:56.535 13:12:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:56.794 13:12:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:56.794 "name": "Existed_Raid", 00:27:56.794 "uuid": "f65d4ee3-3d41-4d0c-bfab-a0f6cabae946", 00:27:56.794 "strip_size_kb": 64, 00:27:56.794 "state": "configuring", 00:27:56.794 "raid_level": "raid5f", 00:27:56.794 "superblock": true, 00:27:56.794 "num_base_bdevs": 4, 00:27:56.794 "num_base_bdevs_discovered": 2, 00:27:56.794 "num_base_bdevs_operational": 4, 00:27:56.794 "base_bdevs_list": [ 00:27:56.794 { 00:27:56.794 "name": "BaseBdev1", 00:27:56.794 "uuid": "f0979776-4788-4d09-9a93-667c5f0942f5", 00:27:56.794 "is_configured": true, 00:27:56.794 "data_offset": 2048, 00:27:56.794 "data_size": 63488 00:27:56.794 }, 00:27:56.794 { 00:27:56.794 "name": "BaseBdev2", 00:27:56.794 "uuid": "8dd369a2-17be-42d8-8a94-4f41a3de7508", 00:27:56.794 "is_configured": true, 00:27:56.794 "data_offset": 2048, 00:27:56.794 "data_size": 63488 00:27:56.794 }, 00:27:56.794 { 00:27:56.794 "name": "BaseBdev3", 00:27:56.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:56.794 "is_configured": false, 00:27:56.794 "data_offset": 0, 00:27:56.794 "data_size": 0 00:27:56.794 }, 00:27:56.794 { 00:27:56.794 "name": "BaseBdev4", 00:27:56.794 "uuid": "00000000-0000-0000-0000-000000000000", 
00:27:56.794 "is_configured": false, 00:27:56.794 "data_offset": 0, 00:27:56.794 "data_size": 0 00:27:56.794 } 00:27:56.794 ] 00:27:56.794 }' 00:27:56.794 13:12:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:56.794 13:12:00 -- common/autotest_common.sh@10 -- # set +x 00:27:57.730 13:12:01 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:27:57.730 [2024-04-17 13:12:01.769981] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:57.730 BaseBdev3 00:27:57.730 13:12:01 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:27:57.730 13:12:01 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:27:57.730 13:12:01 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:27:57.730 13:12:01 -- common/autotest_common.sh@887 -- # local i 00:27:57.730 13:12:01 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:27:57.730 13:12:01 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:27:57.730 13:12:01 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:57.989 13:12:02 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:58.247 [ 00:27:58.247 { 00:27:58.247 "name": "BaseBdev3", 00:27:58.247 "aliases": [ 00:27:58.247 "46df91be-02f3-4026-84f6-5765a4bb803f" 00:27:58.247 ], 00:27:58.247 "product_name": "Malloc disk", 00:27:58.247 "block_size": 512, 00:27:58.247 "num_blocks": 65536, 00:27:58.247 "uuid": "46df91be-02f3-4026-84f6-5765a4bb803f", 00:27:58.247 "assigned_rate_limits": { 00:27:58.247 "rw_ios_per_sec": 0, 00:27:58.247 "rw_mbytes_per_sec": 0, 00:27:58.247 "r_mbytes_per_sec": 0, 00:27:58.247 "w_mbytes_per_sec": 0 00:27:58.247 }, 00:27:58.247 "claimed": true, 00:27:58.247 "claim_type": "exclusive_write", 00:27:58.247 "zoned": false, 00:27:58.247 "supported_io_types": { 00:27:58.247 "read": true, 00:27:58.247 "write": true, 00:27:58.247 "unmap": true, 00:27:58.247 "write_zeroes": true, 00:27:58.247 "flush": true, 00:27:58.247 "reset": true, 00:27:58.247 "compare": false, 00:27:58.247 "compare_and_write": false, 00:27:58.247 "abort": true, 00:27:58.247 "nvme_admin": false, 00:27:58.247 "nvme_io": false 00:27:58.247 }, 00:27:58.247 "memory_domains": [ 00:27:58.247 { 00:27:58.247 "dma_device_id": "system", 00:27:58.247 "dma_device_type": 1 00:27:58.247 }, 00:27:58.247 { 00:27:58.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:58.247 "dma_device_type": 2 00:27:58.247 } 00:27:58.247 ], 00:27:58.247 "driver_specific": {} 00:27:58.247 } 00:27:58.247 ] 00:27:58.247 13:12:02 -- common/autotest_common.sh@893 -- # return 0 00:27:58.247 13:12:02 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:27:58.247 13:12:02 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:27:58.247 13:12:02 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:58.247 13:12:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:58.247 13:12:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:58.247 13:12:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:58.247 13:12:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:58.247 13:12:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:58.247 13:12:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:58.247 13:12:02 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:58.247 13:12:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:58.247 13:12:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:58.247 13:12:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:58.247 13:12:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:58.506 13:12:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:58.506 "name": "Existed_Raid", 00:27:58.506 "uuid": "f65d4ee3-3d41-4d0c-bfab-a0f6cabae946", 00:27:58.506 "strip_size_kb": 64, 00:27:58.506 "state": "configuring", 00:27:58.506 "raid_level": "raid5f", 00:27:58.506 "superblock": true, 00:27:58.506 "num_base_bdevs": 4, 00:27:58.506 "num_base_bdevs_discovered": 3, 00:27:58.506 "num_base_bdevs_operational": 4, 00:27:58.506 "base_bdevs_list": [ 00:27:58.506 { 00:27:58.506 "name": "BaseBdev1", 00:27:58.506 "uuid": "f0979776-4788-4d09-9a93-667c5f0942f5", 00:27:58.506 "is_configured": true, 00:27:58.506 "data_offset": 2048, 00:27:58.506 "data_size": 63488 00:27:58.506 }, 00:27:58.506 { 00:27:58.506 "name": "BaseBdev2", 00:27:58.506 "uuid": "8dd369a2-17be-42d8-8a94-4f41a3de7508", 00:27:58.506 "is_configured": true, 00:27:58.506 "data_offset": 2048, 00:27:58.506 "data_size": 63488 00:27:58.506 }, 00:27:58.506 { 00:27:58.506 "name": "BaseBdev3", 00:27:58.506 "uuid": "46df91be-02f3-4026-84f6-5765a4bb803f", 00:27:58.506 "is_configured": true, 00:27:58.506 "data_offset": 2048, 00:27:58.506 "data_size": 63488 00:27:58.506 }, 00:27:58.506 { 00:27:58.506 "name": "BaseBdev4", 00:27:58.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:58.506 "is_configured": false, 00:27:58.506 "data_offset": 0, 00:27:58.506 "data_size": 0 00:27:58.506 } 00:27:58.506 ] 00:27:58.506 }' 00:27:58.506 13:12:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:58.506 13:12:02 -- common/autotest_common.sh@10 -- # set +x 00:27:59.073 13:12:03 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:27:59.640 [2024-04-17 13:12:03.494413] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:59.640 [2024-04-17 13:12:03.494967] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:27:59.640 [2024-04-17 13:12:03.495122] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:27:59.640 [2024-04-17 13:12:03.495389] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:27:59.640 BaseBdev4 00:27:59.640 [2024-04-17 13:12:03.502800] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:27:59.640 [2024-04-17 13:12:03.502976] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:27:59.640 [2024-04-17 13:12:03.503301] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:59.640 13:12:03 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:27:59.640 13:12:03 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:27:59.640 13:12:03 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:27:59.640 13:12:03 -- common/autotest_common.sh@887 -- # local i 00:27:59.640 13:12:03 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:27:59.640 13:12:03 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:27:59.640 13:12:03 -- 
common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:59.640 13:12:03 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:27:59.900 [ 00:27:59.900 { 00:27:59.900 "name": "BaseBdev4", 00:27:59.900 "aliases": [ 00:27:59.900 "412ac6f3-83b2-4a7c-9faf-27ede7582f23" 00:27:59.900 ], 00:27:59.900 "product_name": "Malloc disk", 00:27:59.900 "block_size": 512, 00:27:59.900 "num_blocks": 65536, 00:27:59.900 "uuid": "412ac6f3-83b2-4a7c-9faf-27ede7582f23", 00:27:59.900 "assigned_rate_limits": { 00:27:59.900 "rw_ios_per_sec": 0, 00:27:59.900 "rw_mbytes_per_sec": 0, 00:27:59.900 "r_mbytes_per_sec": 0, 00:27:59.900 "w_mbytes_per_sec": 0 00:27:59.900 }, 00:27:59.900 "claimed": true, 00:27:59.900 "claim_type": "exclusive_write", 00:27:59.900 "zoned": false, 00:27:59.900 "supported_io_types": { 00:27:59.900 "read": true, 00:27:59.900 "write": true, 00:27:59.900 "unmap": true, 00:27:59.900 "write_zeroes": true, 00:27:59.900 "flush": true, 00:27:59.900 "reset": true, 00:27:59.900 "compare": false, 00:27:59.900 "compare_and_write": false, 00:27:59.900 "abort": true, 00:27:59.900 "nvme_admin": false, 00:27:59.900 "nvme_io": false 00:27:59.900 }, 00:27:59.900 "memory_domains": [ 00:27:59.900 { 00:27:59.900 "dma_device_id": "system", 00:27:59.900 "dma_device_type": 1 00:27:59.900 }, 00:27:59.900 { 00:27:59.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:59.900 "dma_device_type": 2 00:27:59.900 } 00:27:59.900 ], 00:27:59.900 "driver_specific": {} 00:27:59.900 } 00:27:59.900 ] 00:27:59.900 13:12:03 -- common/autotest_common.sh@893 -- # return 0 00:27:59.900 13:12:03 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:27:59.900 13:12:03 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:27:59.900 13:12:03 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:27:59.900 13:12:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:59.900 13:12:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:59.900 13:12:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:59.900 13:12:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:59.900 13:12:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:59.900 13:12:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:59.900 13:12:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:59.900 13:12:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:59.900 13:12:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:59.900 13:12:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:59.900 13:12:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:00.159 13:12:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:00.159 "name": "Existed_Raid", 00:28:00.159 "uuid": "f65d4ee3-3d41-4d0c-bfab-a0f6cabae946", 00:28:00.159 "strip_size_kb": 64, 00:28:00.159 "state": "online", 00:28:00.159 "raid_level": "raid5f", 00:28:00.159 "superblock": true, 00:28:00.159 "num_base_bdevs": 4, 00:28:00.159 "num_base_bdevs_discovered": 4, 00:28:00.159 "num_base_bdevs_operational": 4, 00:28:00.159 "base_bdevs_list": [ 00:28:00.159 { 00:28:00.159 "name": "BaseBdev1", 00:28:00.159 "uuid": "f0979776-4788-4d09-9a93-667c5f0942f5", 00:28:00.159 "is_configured": true, 00:28:00.159 "data_offset": 2048, 
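
The locals printed at bdev_raid.sh@117-125 outline what each state check asserts. A condensed sketch of verify_raid_bdev_state under those names (an assumed reconstruction from the trace, not the verbatim script):

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'

    verify_raid_bdev_state() {
        local raid_bdev_name=$1 expected_state=$2 raid_level=$3
        local strip_size=$4 num_base_bdevs_operational=$5
        local tmp
        # Pull the one raid bdev record the test cares about.
        tmp=$($rpc bdev_raid_get_bdevs all |
            jq -r ".[] | select(.name == \"$raid_bdev_name\")")
        # Compare each field against the expected value passed in.
        [[ $(jq -r '.state' <<< "$tmp") == "$expected_state" ]] &&
            [[ $(jq -r '.raid_level' <<< "$tmp") == "$raid_level" ]] &&
            [[ $(jq -r '.strip_size_kb' <<< "$tmp") -eq $strip_size ]] &&
            [[ $(jq -r '.num_base_bdevs_operational' <<< "$tmp") -eq \
               $num_base_bdevs_operational ]]
    }
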
00:28:00.159 "data_size": 63488 00:28:00.159 }, 00:28:00.159 { 00:28:00.159 "name": "BaseBdev2", 00:28:00.159 "uuid": "8dd369a2-17be-42d8-8a94-4f41a3de7508", 00:28:00.159 "is_configured": true, 00:28:00.159 "data_offset": 2048, 00:28:00.159 "data_size": 63488 00:28:00.159 }, 00:28:00.159 { 00:28:00.159 "name": "BaseBdev3", 00:28:00.159 "uuid": "46df91be-02f3-4026-84f6-5765a4bb803f", 00:28:00.159 "is_configured": true, 00:28:00.159 "data_offset": 2048, 00:28:00.159 "data_size": 63488 00:28:00.159 }, 00:28:00.159 { 00:28:00.159 "name": "BaseBdev4", 00:28:00.159 "uuid": "412ac6f3-83b2-4a7c-9faf-27ede7582f23", 00:28:00.159 "is_configured": true, 00:28:00.159 "data_offset": 2048, 00:28:00.159 "data_size": 63488 00:28:00.159 } 00:28:00.159 ] 00:28:00.159 }' 00:28:00.159 13:12:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:00.159 13:12:04 -- common/autotest_common.sh@10 -- # set +x 00:28:01.124 13:12:05 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:28:01.393 [2024-04-17 13:12:05.271545] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:01.393 13:12:05 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:28:01.393 13:12:05 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:28:01.393 13:12:05 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:28:01.393 13:12:05 -- bdev/bdev_raid.sh@196 -- # return 0 00:28:01.393 13:12:05 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:28:01.393 13:12:05 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:28:01.393 13:12:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:28:01.393 13:12:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:01.393 13:12:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:01.393 13:12:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:01.393 13:12:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:01.393 13:12:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:01.393 13:12:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:01.393 13:12:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:01.393 13:12:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:01.393 13:12:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:01.393 13:12:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:01.652 13:12:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:01.652 "name": "Existed_Raid", 00:28:01.652 "uuid": "f65d4ee3-3d41-4d0c-bfab-a0f6cabae946", 00:28:01.652 "strip_size_kb": 64, 00:28:01.652 "state": "online", 00:28:01.652 "raid_level": "raid5f", 00:28:01.652 "superblock": true, 00:28:01.652 "num_base_bdevs": 4, 00:28:01.652 "num_base_bdevs_discovered": 3, 00:28:01.652 "num_base_bdevs_operational": 3, 00:28:01.652 "base_bdevs_list": [ 00:28:01.652 { 00:28:01.652 "name": null, 00:28:01.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:01.652 "is_configured": false, 00:28:01.652 "data_offset": 2048, 00:28:01.652 "data_size": 63488 00:28:01.652 }, 00:28:01.652 { 00:28:01.652 "name": "BaseBdev2", 00:28:01.652 "uuid": "8dd369a2-17be-42d8-8a94-4f41a3de7508", 00:28:01.652 "is_configured": true, 00:28:01.652 "data_offset": 2048, 00:28:01.652 "data_size": 63488 00:28:01.652 }, 00:28:01.652 { 00:28:01.652 "name": "BaseBdev3", 00:28:01.652 "uuid": 
"46df91be-02f3-4026-84f6-5765a4bb803f", 00:28:01.652 "is_configured": true, 00:28:01.652 "data_offset": 2048, 00:28:01.652 "data_size": 63488 00:28:01.652 }, 00:28:01.652 { 00:28:01.652 "name": "BaseBdev4", 00:28:01.652 "uuid": "412ac6f3-83b2-4a7c-9faf-27ede7582f23", 00:28:01.652 "is_configured": true, 00:28:01.652 "data_offset": 2048, 00:28:01.652 "data_size": 63488 00:28:01.652 } 00:28:01.652 ] 00:28:01.652 }' 00:28:01.652 13:12:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:01.652 13:12:05 -- common/autotest_common.sh@10 -- # set +x 00:28:02.220 13:12:06 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:28:02.220 13:12:06 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:28:02.220 13:12:06 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:02.220 13:12:06 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:28:02.480 13:12:06 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:28:02.480 13:12:06 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:02.480 13:12:06 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:28:02.739 [2024-04-17 13:12:06.846736] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:02.739 [2024-04-17 13:12:06.847143] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:02.998 [2024-04-17 13:12:06.931822] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:02.999 13:12:06 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:28:02.999 13:12:06 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:28:02.999 13:12:06 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:28:02.999 13:12:06 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:03.258 13:12:07 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:28:03.258 13:12:07 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:03.258 13:12:07 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:28:03.517 [2024-04-17 13:12:07.568184] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:28:03.775 13:12:07 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:28:03.775 13:12:07 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:28:03.775 13:12:07 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:03.775 13:12:07 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:28:04.033 13:12:07 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:28:04.033 13:12:07 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:04.033 13:12:07 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:28:04.314 [2024-04-17 13:12:08.301019] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:28:04.314 [2024-04-17 13:12:08.301382] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:28:04.314 13:12:08 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:28:04.315 13:12:08 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:28:04.315 13:12:08 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:04.315 13:12:08 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:28:04.573 13:12:08 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:28:04.573 13:12:08 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:28:04.573 13:12:08 -- bdev/bdev_raid.sh@287 -- # killprocess 138951 00:28:04.573 13:12:08 -- common/autotest_common.sh@924 -- # '[' -z 138951 ']' 00:28:04.573 13:12:08 -- common/autotest_common.sh@928 -- # kill -0 138951 00:28:04.573 13:12:08 -- common/autotest_common.sh@929 -- # uname 00:28:04.573 13:12:08 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:28:04.573 13:12:08 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 138951 00:28:04.573 killing process with pid 138951 00:28:04.573 13:12:08 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:28:04.573 13:12:08 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:28:04.573 13:12:08 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 138951' 00:28:04.573 13:12:08 -- common/autotest_common.sh@943 -- # kill 138951 00:28:04.573 13:12:08 -- common/autotest_common.sh@948 -- # wait 138951 00:28:04.573 [2024-04-17 13:12:08.680837] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:04.573 [2024-04-17 13:12:08.681016] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:05.954 ************************************ 00:28:05.955 END TEST raid5f_state_function_test_sb 00:28:05.955 ************************************ 00:28:05.955 13:12:09 -- bdev/bdev_raid.sh@289 -- # return 0 00:28:05.955 00:28:05.955 real 0m17.135s 00:28:05.955 user 0m30.788s 00:28:05.955 sys 0m1.887s 00:28:05.955 13:12:09 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:28:05.955 13:12:09 -- common/autotest_common.sh@10 -- # set +x 00:28:05.955 13:12:09 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:28:05.955 13:12:09 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:28:05.955 13:12:09 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:28:05.955 13:12:09 -- common/autotest_common.sh@10 -- # set +x 00:28:05.955 ************************************ 00:28:05.955 START TEST raid5f_superblock_test 00:28:05.955 ************************************ 00:28:05.955 13:12:09 -- common/autotest_common.sh@1099 -- # raid_superblock_test raid5f 4 00:28:05.955 13:12:09 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:28:05.955 13:12:09 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:28:05.955 13:12:09 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:28:05.955 13:12:09 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:28:05.955 13:12:09 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:28:05.955 13:12:09 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:28:05.955 13:12:09 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:28:05.955 13:12:09 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:28:05.955 13:12:09 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:28:05.955 13:12:09 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:28:05.955 13:12:09 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:28:05.955 13:12:09 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:28:05.955 13:12:09 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:28:05.955 13:12:09 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:28:05.955 13:12:09 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:28:05.955 13:12:09 -- 
bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:28:05.955 13:12:09 -- bdev/bdev_raid.sh@357 -- # raid_pid=139458 00:28:05.955 13:12:09 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:28:05.955 13:12:09 -- bdev/bdev_raid.sh@358 -- # waitforlisten 139458 /var/tmp/spdk-raid.sock 00:28:05.955 13:12:09 -- common/autotest_common.sh@817 -- # '[' -z 139458 ']' 00:28:05.955 13:12:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:05.955 13:12:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:05.955 13:12:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:05.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:05.955 13:12:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:05.955 13:12:09 -- common/autotest_common.sh@10 -- # set +x 00:28:05.955 [2024-04-17 13:12:09.977834] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:28:05.955 [2024-04-17 13:12:09.978239] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139458 ] 00:28:06.215 [2024-04-17 13:12:10.131403] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:06.215 [2024-04-17 13:12:10.360200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:06.476 [2024-04-17 13:12:10.560439] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:07.044 13:12:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:07.044 13:12:10 -- common/autotest_common.sh@850 -- # return 0 00:28:07.044 13:12:10 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:28:07.044 13:12:10 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:28:07.044 13:12:10 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:28:07.044 13:12:10 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:28:07.044 13:12:10 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:28:07.044 13:12:10 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:07.044 13:12:10 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:28:07.044 13:12:10 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:07.044 13:12:10 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:28:07.304 malloc1 00:28:07.304 13:12:11 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:07.563 [2024-04-17 13:12:11.503390] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:07.563 [2024-04-17 13:12:11.503760] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:07.563 [2024-04-17 13:12:11.503931] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:28:07.563 [2024-04-17 13:12:11.504185] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:07.563 [2024-04-17 13:12:11.506843] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:07.563 
[2024-04-17 13:12:11.507005] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:07.563 pt1 00:28:07.563 13:12:11 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:28:07.563 13:12:11 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:28:07.563 13:12:11 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:28:07.563 13:12:11 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:28:07.563 13:12:11 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:28:07.563 13:12:11 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:07.563 13:12:11 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:28:07.563 13:12:11 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:07.563 13:12:11 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:28:07.831 malloc2 00:28:07.831 13:12:11 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:08.090 [2024-04-17 13:12:12.107399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:08.090 [2024-04-17 13:12:12.107773] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:08.090 [2024-04-17 13:12:12.107964] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:28:08.090 [2024-04-17 13:12:12.108157] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:08.090 [2024-04-17 13:12:12.110883] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:08.090 [2024-04-17 13:12:12.111055] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:08.090 pt2 00:28:08.090 13:12:12 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:28:08.090 13:12:12 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:28:08.090 13:12:12 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:28:08.090 13:12:12 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:28:08.090 13:12:12 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:28:08.090 13:12:12 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:08.090 13:12:12 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:28:08.090 13:12:12 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:08.090 13:12:12 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:28:08.355 malloc3 00:28:08.355 13:12:12 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:28:08.617 [2024-04-17 13:12:12.670402] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:28:08.617 [2024-04-17 13:12:12.670698] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:08.617 [2024-04-17 13:12:12.670854] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:28:08.617 [2024-04-17 13:12:12.670996] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:08.617 [2024-04-17 13:12:12.673674] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:08.617 
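
The superblock test is building its base devices here: one malloc bdev per slot, each wrapped in a passthru ("pt") bdev carrying a fixed UUID. A minimal sketch of that per-device setup (the loop form is inferred from the (( i <= num_base_bdevs )) counters in the trace):

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'

    for ((i = 1; i <= 4; i++)); do
        # 32 MiB malloc bdev with 512-byte blocks, as in the traced command.
        $rpc bdev_malloc_create 32 512 -b "malloc$i"
        # Passthru wrapper whose UUID ends in the slot index.
        $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done
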
[2024-04-17 13:12:12.673846] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:28:08.617 pt3 00:28:08.617 13:12:12 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:28:08.617 13:12:12 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:28:08.617 13:12:12 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:28:08.617 13:12:12 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:28:08.617 13:12:12 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:28:08.617 13:12:12 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:08.617 13:12:12 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:28:08.617 13:12:12 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:08.617 13:12:12 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:28:08.876 malloc4 00:28:08.876 13:12:12 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:28:09.145 [2024-04-17 13:12:13.153523] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:28:09.145 [2024-04-17 13:12:13.153896] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:09.145 [2024-04-17 13:12:13.153978] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:28:09.145 [2024-04-17 13:12:13.154244] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:09.145 [2024-04-17 13:12:13.156874] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:09.145 [2024-04-17 13:12:13.157044] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:28:09.145 pt4 00:28:09.145 13:12:13 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:28:09.145 13:12:13 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:28:09.145 13:12:13 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:28:09.404 [2024-04-17 13:12:13.381659] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:09.405 [2024-04-17 13:12:13.384044] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:09.405 [2024-04-17 13:12:13.384241] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:28:09.405 [2024-04-17 13:12:13.384374] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:28:09.405 [2024-04-17 13:12:13.384688] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:28:09.405 [2024-04-17 13:12:13.384806] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:28:09.405 [2024-04-17 13:12:13.384979] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:28:09.405 [2024-04-17 13:12:13.391919] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:28:09.405 [2024-04-17 13:12:13.392055] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:28:09.405 [2024-04-17 13:12:13.392362] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:09.405 13:12:13 -- bdev/bdev_raid.sh@376 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:28:09.405 13:12:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:09.405 13:12:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:09.405 13:12:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:09.405 13:12:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:09.405 13:12:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:28:09.405 13:12:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:09.405 13:12:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:09.405 13:12:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:09.405 13:12:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:09.405 13:12:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:09.405 13:12:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:09.676 13:12:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:09.676 "name": "raid_bdev1", 00:28:09.676 "uuid": "cc531de7-26c6-4bdf-a1ca-4dbed1306156", 00:28:09.676 "strip_size_kb": 64, 00:28:09.676 "state": "online", 00:28:09.676 "raid_level": "raid5f", 00:28:09.676 "superblock": true, 00:28:09.676 "num_base_bdevs": 4, 00:28:09.676 "num_base_bdevs_discovered": 4, 00:28:09.676 "num_base_bdevs_operational": 4, 00:28:09.676 "base_bdevs_list": [ 00:28:09.676 { 00:28:09.676 "name": "pt1", 00:28:09.676 "uuid": "0f76869b-ae1b-54b1-a9db-66a7365a3186", 00:28:09.676 "is_configured": true, 00:28:09.676 "data_offset": 2048, 00:28:09.676 "data_size": 63488 00:28:09.676 }, 00:28:09.676 { 00:28:09.676 "name": "pt2", 00:28:09.676 "uuid": "aa8e8571-8d30-5e8f-9849-8868e85da1b4", 00:28:09.676 "is_configured": true, 00:28:09.676 "data_offset": 2048, 00:28:09.676 "data_size": 63488 00:28:09.676 }, 00:28:09.676 { 00:28:09.676 "name": "pt3", 00:28:09.676 "uuid": "fbd47617-30f9-5e05-9307-8e992a153772", 00:28:09.676 "is_configured": true, 00:28:09.676 "data_offset": 2048, 00:28:09.676 "data_size": 63488 00:28:09.676 }, 00:28:09.676 { 00:28:09.676 "name": "pt4", 00:28:09.676 "uuid": "dde48c19-9c48-5478-ac0a-e12aee460c18", 00:28:09.676 "is_configured": true, 00:28:09.676 "data_offset": 2048, 00:28:09.676 "data_size": 63488 00:28:09.676 } 00:28:09.676 ] 00:28:09.676 }' 00:28:09.676 13:12:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:09.676 13:12:13 -- common/autotest_common.sh@10 -- # set +x 00:28:10.255 13:12:14 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:10.255 13:12:14 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:28:10.513 [2024-04-17 13:12:14.544118] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:10.513 13:12:14 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=cc531de7-26c6-4bdf-a1ca-4dbed1306156 00:28:10.513 13:12:14 -- bdev/bdev_raid.sh@380 -- # '[' -z cc531de7-26c6-4bdf-a1ca-4dbed1306156 ']' 00:28:10.513 13:12:14 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:10.772 [2024-04-17 13:12:14.800003] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:10.772 [2024-04-17 13:12:14.800227] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:10.772 [2024-04-17 13:12:14.800407] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:28:10.772 [2024-04-17 13:12:14.800605] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:10.772 [2024-04-17 13:12:14.800726] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:28:10.772 13:12:14 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:28:10.772 13:12:14 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:11.031 13:12:15 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:28:11.031 13:12:15 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:28:11.031 13:12:15 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:28:11.031 13:12:15 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:28:11.290 13:12:15 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:28:11.290 13:12:15 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:28:11.549 13:12:15 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:28:11.549 13:12:15 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:28:11.807 13:12:15 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:28:11.807 13:12:15 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:28:12.066 13:12:16 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:28:12.066 13:12:16 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:28:12.325 13:12:16 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:28:12.325 13:12:16 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:28:12.325 13:12:16 -- common/autotest_common.sh@638 -- # local es=0 00:28:12.325 13:12:16 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:28:12.325 13:12:16 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:12.325 13:12:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:12.325 13:12:16 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:12.325 13:12:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:12.325 13:12:16 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:12.325 13:12:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:12.325 13:12:16 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:12.325 13:12:16 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:28:12.325 13:12:16 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:28:12.603 [2024-04-17 13:12:16.604409] 
bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:28:12.603 [2024-04-17 13:12:16.606789] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:28:12.603 [2024-04-17 13:12:16.606991] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:28:12.603 [2024-04-17 13:12:16.607157] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:28:12.603 [2024-04-17 13:12:16.607354] bdev_raid.c:2995:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:28:12.603 [2024-04-17 13:12:16.607554] bdev_raid.c:2995:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:28:12.603 [2024-04-17 13:12:16.607706] bdev_raid.c:2995:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:28:12.603 [2024-04-17 13:12:16.607896] bdev_raid.c:2995:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:28:12.603 [2024-04-17 13:12:16.608032] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:12.603 [2024-04-17 13:12:16.608074] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring 00:28:12.603 request: 00:28:12.603 { 00:28:12.603 "name": "raid_bdev1", 00:28:12.603 "raid_level": "raid5f", 00:28:12.603 "base_bdevs": [ 00:28:12.603 "malloc1", 00:28:12.603 "malloc2", 00:28:12.603 "malloc3", 00:28:12.603 "malloc4" 00:28:12.603 ], 00:28:12.603 "superblock": false, 00:28:12.603 "strip_size_kb": 64, 00:28:12.603 "method": "bdev_raid_create", 00:28:12.603 "req_id": 1 00:28:12.603 } 00:28:12.603 Got JSON-RPC error response 00:28:12.603 response: 00:28:12.603 { 00:28:12.603 "code": -17, 00:28:12.603 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:28:12.603 } 00:28:12.603 13:12:16 -- common/autotest_common.sh@641 -- # es=1 00:28:12.603 13:12:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:28:12.603 13:12:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:28:12.603 13:12:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:28:12.603 13:12:16 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:12.603 13:12:16 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:28:12.879 13:12:16 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:28:12.879 13:12:16 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:28:12.879 13:12:16 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:13.138 [2024-04-17 13:12:17.112531] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:13.138 [2024-04-17 13:12:17.112816] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:13.138 [2024-04-17 13:12:17.112886] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:28:13.138 [2024-04-17 13:12:17.113076] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:13.138 [2024-04-17 13:12:17.115609] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:13.138 [2024-04-17 13:12:17.115794] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:13.138 [2024-04-17 13:12:17.116095] 
bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:28:13.138 [2024-04-17 13:12:17.116178] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:13.138 pt1 00:28:13.138 13:12:17 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:28:13.138 13:12:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:13.138 13:12:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:13.138 13:12:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:13.138 13:12:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:13.138 13:12:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:28:13.138 13:12:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:13.138 13:12:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:13.139 13:12:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:13.139 13:12:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:13.139 13:12:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:13.139 13:12:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:13.397 13:12:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:13.397 "name": "raid_bdev1", 00:28:13.397 "uuid": "cc531de7-26c6-4bdf-a1ca-4dbed1306156", 00:28:13.397 "strip_size_kb": 64, 00:28:13.397 "state": "configuring", 00:28:13.397 "raid_level": "raid5f", 00:28:13.397 "superblock": true, 00:28:13.397 "num_base_bdevs": 4, 00:28:13.397 "num_base_bdevs_discovered": 1, 00:28:13.397 "num_base_bdevs_operational": 4, 00:28:13.397 "base_bdevs_list": [ 00:28:13.397 { 00:28:13.397 "name": "pt1", 00:28:13.397 "uuid": "0f76869b-ae1b-54b1-a9db-66a7365a3186", 00:28:13.397 "is_configured": true, 00:28:13.397 "data_offset": 2048, 00:28:13.397 "data_size": 63488 00:28:13.397 }, 00:28:13.397 { 00:28:13.397 "name": null, 00:28:13.397 "uuid": "aa8e8571-8d30-5e8f-9849-8868e85da1b4", 00:28:13.397 "is_configured": false, 00:28:13.397 "data_offset": 2048, 00:28:13.397 "data_size": 63488 00:28:13.397 }, 00:28:13.397 { 00:28:13.397 "name": null, 00:28:13.397 "uuid": "fbd47617-30f9-5e05-9307-8e992a153772", 00:28:13.397 "is_configured": false, 00:28:13.397 "data_offset": 2048, 00:28:13.397 "data_size": 63488 00:28:13.397 }, 00:28:13.397 { 00:28:13.397 "name": null, 00:28:13.397 "uuid": "dde48c19-9c48-5478-ac0a-e12aee460c18", 00:28:13.397 "is_configured": false, 00:28:13.397 "data_offset": 2048, 00:28:13.397 "data_size": 63488 00:28:13.397 } 00:28:13.397 ] 00:28:13.397 }' 00:28:13.397 13:12:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:13.397 13:12:17 -- common/autotest_common.sh@10 -- # set +x 00:28:14.333 13:12:18 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:28:14.333 13:12:18 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:14.333 [2024-04-17 13:12:18.384874] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:14.333 [2024-04-17 13:12:18.385226] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:14.333 [2024-04-17 13:12:18.385386] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:28:14.333 [2024-04-17 13:12:18.385544] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 
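
A few steps above, bdev_raid_create on the raw malloc bdevs was expected to fail with the -17 "File exists" JSON-RPC error, since each malloc already carries a raid superblock; the test asserts this with autotest's NOT wrapper. A minimal sketch of that expected-failure pattern (an assumed shape; the real helper also inspects the exit status for signals, per the "es > 128" handling in the trace):

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'

    NOT() {
        if "$@"; then
            return 1   # command unexpectedly succeeded
        fi
        return 0       # command failed, as the test expects
    }

    NOT $rpc bdev_raid_create -z 64 -r raid5f \
        -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
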
00:28:14.333 [2024-04-17 13:12:18.386214] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:14.333 [2024-04-17 13:12:18.386389] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:14.333 [2024-04-17 13:12:18.386609] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:28:14.333 [2024-04-17 13:12:18.386753] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:14.333 pt2 00:28:14.333 13:12:18 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:28:14.591 [2024-04-17 13:12:18.656955] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:28:14.591 13:12:18 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:28:14.591 13:12:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:14.591 13:12:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:14.591 13:12:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:14.591 13:12:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:14.591 13:12:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:28:14.591 13:12:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:14.591 13:12:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:14.591 13:12:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:14.591 13:12:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:14.591 13:12:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:14.591 13:12:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:14.850 13:12:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:14.850 "name": "raid_bdev1", 00:28:14.850 "uuid": "cc531de7-26c6-4bdf-a1ca-4dbed1306156", 00:28:14.850 "strip_size_kb": 64, 00:28:14.850 "state": "configuring", 00:28:14.850 "raid_level": "raid5f", 00:28:14.850 "superblock": true, 00:28:14.850 "num_base_bdevs": 4, 00:28:14.850 "num_base_bdevs_discovered": 1, 00:28:14.850 "num_base_bdevs_operational": 4, 00:28:14.850 "base_bdevs_list": [ 00:28:14.850 { 00:28:14.850 "name": "pt1", 00:28:14.850 "uuid": "0f76869b-ae1b-54b1-a9db-66a7365a3186", 00:28:14.850 "is_configured": true, 00:28:14.850 "data_offset": 2048, 00:28:14.850 "data_size": 63488 00:28:14.850 }, 00:28:14.850 { 00:28:14.850 "name": null, 00:28:14.850 "uuid": "aa8e8571-8d30-5e8f-9849-8868e85da1b4", 00:28:14.850 "is_configured": false, 00:28:14.850 "data_offset": 2048, 00:28:14.850 "data_size": 63488 00:28:14.850 }, 00:28:14.850 { 00:28:14.850 "name": null, 00:28:14.850 "uuid": "fbd47617-30f9-5e05-9307-8e992a153772", 00:28:14.850 "is_configured": false, 00:28:14.850 "data_offset": 2048, 00:28:14.850 "data_size": 63488 00:28:14.850 }, 00:28:14.850 { 00:28:14.850 "name": null, 00:28:14.850 "uuid": "dde48c19-9c48-5478-ac0a-e12aee460c18", 00:28:14.850 "is_configured": false, 00:28:14.850 "data_offset": 2048, 00:28:14.850 "data_size": 63488 00:28:14.850 } 00:28:14.850 ] 00:28:14.850 }' 00:28:14.850 13:12:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:14.850 13:12:18 -- common/autotest_common.sh@10 -- # set +x 00:28:15.784 13:12:19 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:28:15.784 13:12:19 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:28:15.784 13:12:19 -- bdev/bdev_raid.sh@423 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:15.784 [2024-04-17 13:12:19.897297] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:15.784 [2024-04-17 13:12:19.897419] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:15.784 [2024-04-17 13:12:19.897465] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:28:15.784 [2024-04-17 13:12:19.897491] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:15.784 [2024-04-17 13:12:19.898027] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:15.784 [2024-04-17 13:12:19.898085] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:15.784 [2024-04-17 13:12:19.898196] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:28:15.784 [2024-04-17 13:12:19.898238] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:15.784 pt2 00:28:15.784 13:12:19 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:28:15.784 13:12:19 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:28:15.784 13:12:19 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:28:16.363 [2024-04-17 13:12:20.213361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:28:16.363 [2024-04-17 13:12:20.213467] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:16.363 [2024-04-17 13:12:20.213506] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:28:16.363 [2024-04-17 13:12:20.213547] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:16.363 [2024-04-17 13:12:20.214091] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:16.363 [2024-04-17 13:12:20.214163] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:28:16.363 [2024-04-17 13:12:20.214275] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:28:16.363 [2024-04-17 13:12:20.214305] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:28:16.363 pt3 00:28:16.363 13:12:20 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:28:16.363 13:12:20 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:28:16.363 13:12:20 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:28:16.363 [2024-04-17 13:12:20.465429] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:28:16.363 [2024-04-17 13:12:20.465559] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:16.363 [2024-04-17 13:12:20.465602] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:28:16.363 [2024-04-17 13:12:20.465632] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:16.363 [2024-04-17 13:12:20.466144] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:16.363 [2024-04-17 13:12:20.466209] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:28:16.363 
[2024-04-17 13:12:20.466324] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:28:16.363 [2024-04-17 13:12:20.466355] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:28:16.363 [2024-04-17 13:12:20.466525] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:28:16.363 [2024-04-17 13:12:20.466542] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:28:16.363 [2024-04-17 13:12:20.466650] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:28:16.363 [2024-04-17 13:12:20.473083] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:28:16.363 [2024-04-17 13:12:20.473117] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:28:16.363 [2024-04-17 13:12:20.473325] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:16.363 pt4 00:28:16.363 13:12:20 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:28:16.363 13:12:20 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:28:16.363 13:12:20 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:28:16.363 13:12:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:16.363 13:12:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:16.363 13:12:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:16.363 13:12:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:16.363 13:12:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:28:16.363 13:12:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:16.363 13:12:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:16.363 13:12:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:16.363 13:12:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:16.363 13:12:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:16.363 13:12:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:16.622 13:12:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:16.622 "name": "raid_bdev1", 00:28:16.622 "uuid": "cc531de7-26c6-4bdf-a1ca-4dbed1306156", 00:28:16.622 "strip_size_kb": 64, 00:28:16.622 "state": "online", 00:28:16.622 "raid_level": "raid5f", 00:28:16.622 "superblock": true, 00:28:16.622 "num_base_bdevs": 4, 00:28:16.622 "num_base_bdevs_discovered": 4, 00:28:16.622 "num_base_bdevs_operational": 4, 00:28:16.622 "base_bdevs_list": [ 00:28:16.622 { 00:28:16.622 "name": "pt1", 00:28:16.622 "uuid": "0f76869b-ae1b-54b1-a9db-66a7365a3186", 00:28:16.622 "is_configured": true, 00:28:16.622 "data_offset": 2048, 00:28:16.622 "data_size": 63488 00:28:16.622 }, 00:28:16.622 { 00:28:16.622 "name": "pt2", 00:28:16.622 "uuid": "aa8e8571-8d30-5e8f-9849-8868e85da1b4", 00:28:16.622 "is_configured": true, 00:28:16.622 "data_offset": 2048, 00:28:16.622 "data_size": 63488 00:28:16.622 }, 00:28:16.622 { 00:28:16.622 "name": "pt3", 00:28:16.622 "uuid": "fbd47617-30f9-5e05-9307-8e992a153772", 00:28:16.622 "is_configured": true, 00:28:16.622 "data_offset": 2048, 00:28:16.622 "data_size": 63488 00:28:16.622 }, 00:28:16.622 { 00:28:16.622 "name": "pt4", 00:28:16.622 "uuid": "dde48c19-9c48-5478-ac0a-e12aee460c18", 00:28:16.622 "is_configured": true, 00:28:16.622 "data_offset": 2048, 00:28:16.622 "data_size": 63488 
00:28:16.622 } 00:28:16.622 ] 00:28:16.622 }' 00:28:16.622 13:12:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:16.622 13:12:20 -- common/autotest_common.sh@10 -- # set +x 00:28:17.555 13:12:21 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:17.555 13:12:21 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:28:17.812 [2024-04-17 13:12:21.701038] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:17.812 13:12:21 -- bdev/bdev_raid.sh@430 -- # '[' cc531de7-26c6-4bdf-a1ca-4dbed1306156 '!=' cc531de7-26c6-4bdf-a1ca-4dbed1306156 ']' 00:28:17.812 13:12:21 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:28:17.812 13:12:21 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:28:17.812 13:12:21 -- bdev/bdev_raid.sh@196 -- # return 0 00:28:17.812 13:12:21 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:28:18.071 [2024-04-17 13:12:21.980996] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:28:18.071 13:12:21 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:28:18.071 13:12:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:18.071 13:12:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:18.071 13:12:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:18.071 13:12:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:18.071 13:12:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:18.071 13:12:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:18.071 13:12:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:18.071 13:12:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:18.071 13:12:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:18.071 13:12:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:18.071 13:12:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:18.329 13:12:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:18.329 "name": "raid_bdev1", 00:28:18.329 "uuid": "cc531de7-26c6-4bdf-a1ca-4dbed1306156", 00:28:18.329 "strip_size_kb": 64, 00:28:18.329 "state": "online", 00:28:18.329 "raid_level": "raid5f", 00:28:18.329 "superblock": true, 00:28:18.329 "num_base_bdevs": 4, 00:28:18.329 "num_base_bdevs_discovered": 3, 00:28:18.329 "num_base_bdevs_operational": 3, 00:28:18.329 "base_bdevs_list": [ 00:28:18.329 { 00:28:18.329 "name": null, 00:28:18.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:18.329 "is_configured": false, 00:28:18.329 "data_offset": 2048, 00:28:18.329 "data_size": 63488 00:28:18.329 }, 00:28:18.329 { 00:28:18.329 "name": "pt2", 00:28:18.329 "uuid": "aa8e8571-8d30-5e8f-9849-8868e85da1b4", 00:28:18.329 "is_configured": true, 00:28:18.329 "data_offset": 2048, 00:28:18.329 "data_size": 63488 00:28:18.330 }, 00:28:18.330 { 00:28:18.330 "name": "pt3", 00:28:18.330 "uuid": "fbd47617-30f9-5e05-9307-8e992a153772", 00:28:18.330 "is_configured": true, 00:28:18.330 "data_offset": 2048, 00:28:18.330 "data_size": 63488 00:28:18.330 }, 00:28:18.330 { 00:28:18.330 "name": "pt4", 00:28:18.330 "uuid": "dde48c19-9c48-5478-ac0a-e12aee460c18", 00:28:18.330 "is_configured": true, 00:28:18.330 "data_offset": 2048, 00:28:18.330 "data_size": 63488 00:28:18.330 } 00:28:18.330 ] 00:28:18.330 }' 00:28:18.330 13:12:22 -- 
bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:18.330 13:12:22 -- common/autotest_common.sh@10 -- # set +x 00:28:18.897 13:12:22 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:19.156 [2024-04-17 13:12:23.177213] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:19.156 [2024-04-17 13:12:23.177260] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:19.156 [2024-04-17 13:12:23.177344] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:19.156 [2024-04-17 13:12:23.177430] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:19.156 [2024-04-17 13:12:23.177443] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:28:19.156 13:12:23 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:19.156 13:12:23 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:28:19.477 13:12:23 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:28:19.477 13:12:23 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:28:19.477 13:12:23 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:28:19.477 13:12:23 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:28:19.477 13:12:23 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:28:19.736 13:12:23 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:28:19.736 13:12:23 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:28:19.736 13:12:23 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:28:19.994 13:12:23 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:28:19.994 13:12:23 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:28:19.994 13:12:23 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:28:20.253 13:12:24 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:28:20.253 13:12:24 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:28:20.253 13:12:24 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:28:20.253 13:12:24 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:28:20.253 13:12:24 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:20.511 [2024-04-17 13:12:24.445450] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:20.511 [2024-04-17 13:12:24.445563] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:20.511 [2024-04-17 13:12:24.445610] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:28:20.512 [2024-04-17 13:12:24.445648] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:20.512 [2024-04-17 13:12:24.448232] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:20.512 [2024-04-17 13:12:24.448317] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:20.512 [2024-04-17 13:12:24.448450] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:28:20.512 [2024-04-17 13:12:24.448506] 
bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:20.512 pt2 00:28:20.512 13:12:24 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:28:20.512 13:12:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:20.512 13:12:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:20.512 13:12:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:20.512 13:12:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:20.512 13:12:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:20.512 13:12:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:20.512 13:12:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:20.512 13:12:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:20.512 13:12:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:20.512 13:12:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:20.512 13:12:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:20.770 13:12:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:20.770 "name": "raid_bdev1", 00:28:20.770 "uuid": "cc531de7-26c6-4bdf-a1ca-4dbed1306156", 00:28:20.770 "strip_size_kb": 64, 00:28:20.770 "state": "configuring", 00:28:20.770 "raid_level": "raid5f", 00:28:20.770 "superblock": true, 00:28:20.770 "num_base_bdevs": 4, 00:28:20.770 "num_base_bdevs_discovered": 1, 00:28:20.770 "num_base_bdevs_operational": 3, 00:28:20.770 "base_bdevs_list": [ 00:28:20.770 { 00:28:20.770 "name": null, 00:28:20.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:20.770 "is_configured": false, 00:28:20.770 "data_offset": 2048, 00:28:20.770 "data_size": 63488 00:28:20.770 }, 00:28:20.770 { 00:28:20.770 "name": "pt2", 00:28:20.770 "uuid": "aa8e8571-8d30-5e8f-9849-8868e85da1b4", 00:28:20.770 "is_configured": true, 00:28:20.770 "data_offset": 2048, 00:28:20.770 "data_size": 63488 00:28:20.770 }, 00:28:20.770 { 00:28:20.770 "name": null, 00:28:20.770 "uuid": "fbd47617-30f9-5e05-9307-8e992a153772", 00:28:20.770 "is_configured": false, 00:28:20.770 "data_offset": 2048, 00:28:20.770 "data_size": 63488 00:28:20.770 }, 00:28:20.770 { 00:28:20.770 "name": null, 00:28:20.770 "uuid": "dde48c19-9c48-5478-ac0a-e12aee460c18", 00:28:20.770 "is_configured": false, 00:28:20.770 "data_offset": 2048, 00:28:20.770 "data_size": 63488 00:28:20.770 } 00:28:20.770 ] 00:28:20.770 }' 00:28:20.770 13:12:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:20.770 13:12:24 -- common/autotest_common.sh@10 -- # set +x 00:28:21.337 13:12:25 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:28:21.337 13:12:25 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:28:21.337 13:12:25 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:28:21.595 [2024-04-17 13:12:25.645726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:28:21.595 [2024-04-17 13:12:25.645832] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:21.595 [2024-04-17 13:12:25.645885] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:28:21.595 [2024-04-17 13:12:25.645919] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:21.595 [2024-04-17 13:12:25.646433] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:21.595 [2024-04-17 13:12:25.646507] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:28:21.595 [2024-04-17 13:12:25.646622] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:28:21.595 [2024-04-17 13:12:25.646652] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:28:21.595 pt3 00:28:21.595 13:12:25 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:28:21.595 13:12:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:21.595 13:12:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:21.595 13:12:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:21.595 13:12:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:21.595 13:12:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:21.595 13:12:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:21.595 13:12:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:21.595 13:12:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:21.595 13:12:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:21.595 13:12:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:21.595 13:12:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:21.853 13:12:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:21.853 "name": "raid_bdev1", 00:28:21.853 "uuid": "cc531de7-26c6-4bdf-a1ca-4dbed1306156", 00:28:21.853 "strip_size_kb": 64, 00:28:21.853 "state": "configuring", 00:28:21.853 "raid_level": "raid5f", 00:28:21.853 "superblock": true, 00:28:21.853 "num_base_bdevs": 4, 00:28:21.853 "num_base_bdevs_discovered": 2, 00:28:21.853 "num_base_bdevs_operational": 3, 00:28:21.853 "base_bdevs_list": [ 00:28:21.853 { 00:28:21.853 "name": null, 00:28:21.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:21.853 "is_configured": false, 00:28:21.853 "data_offset": 2048, 00:28:21.853 "data_size": 63488 00:28:21.853 }, 00:28:21.853 { 00:28:21.853 "name": "pt2", 00:28:21.853 "uuid": "aa8e8571-8d30-5e8f-9849-8868e85da1b4", 00:28:21.853 "is_configured": true, 00:28:21.853 "data_offset": 2048, 00:28:21.853 "data_size": 63488 00:28:21.853 }, 00:28:21.853 { 00:28:21.853 "name": "pt3", 00:28:21.853 "uuid": "fbd47617-30f9-5e05-9307-8e992a153772", 00:28:21.853 "is_configured": true, 00:28:21.853 "data_offset": 2048, 00:28:21.853 "data_size": 63488 00:28:21.853 }, 00:28:21.853 { 00:28:21.853 "name": null, 00:28:21.853 "uuid": "dde48c19-9c48-5478-ac0a-e12aee460c18", 00:28:21.853 "is_configured": false, 00:28:21.853 "data_offset": 2048, 00:28:21.853 "data_size": 63488 00:28:21.853 } 00:28:21.853 ] 00:28:21.853 }' 00:28:21.853 13:12:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:21.853 13:12:25 -- common/autotest_common.sh@10 -- # set +x 00:28:22.788 13:12:26 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:28:22.788 13:12:26 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:28:22.788 13:12:26 -- bdev/bdev_raid.sh@462 -- # i=3 00:28:22.788 13:12:26 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:28:22.788 [2024-04-17 13:12:26.902037] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:28:22.788 
[2024-04-17 13:12:26.902149] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:22.788 [2024-04-17 13:12:26.902195] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:28:22.788 [2024-04-17 13:12:26.902218] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:22.788 [2024-04-17 13:12:26.902749] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:22.788 [2024-04-17 13:12:26.902794] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:28:22.788 [2024-04-17 13:12:26.902909] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:28:22.788 [2024-04-17 13:12:26.902940] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:28:22.788 [2024-04-17 13:12:26.903086] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80 00:28:22.788 [2024-04-17 13:12:26.903110] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:28:22.788 [2024-04-17 13:12:26.903246] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:28:22.788 [2024-04-17 13:12:26.909754] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:28:22.788 [2024-04-17 13:12:26.909785] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:28:22.788 [2024-04-17 13:12:26.910077] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:22.788 pt4 00:28:22.788 13:12:26 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:28:22.788 13:12:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:22.788 13:12:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:22.788 13:12:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:22.788 13:12:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:22.788 13:12:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:22.788 13:12:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:22.788 13:12:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:22.788 13:12:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:22.788 13:12:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:22.788 13:12:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:22.788 13:12:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:23.354 13:12:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:23.354 "name": "raid_bdev1", 00:28:23.354 "uuid": "cc531de7-26c6-4bdf-a1ca-4dbed1306156", 00:28:23.354 "strip_size_kb": 64, 00:28:23.354 "state": "online", 00:28:23.354 "raid_level": "raid5f", 00:28:23.354 "superblock": true, 00:28:23.354 "num_base_bdevs": 4, 00:28:23.354 "num_base_bdevs_discovered": 3, 00:28:23.354 "num_base_bdevs_operational": 3, 00:28:23.354 "base_bdevs_list": [ 00:28:23.354 { 00:28:23.354 "name": null, 00:28:23.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:23.354 "is_configured": false, 00:28:23.354 "data_offset": 2048, 00:28:23.354 "data_size": 63488 00:28:23.354 }, 00:28:23.354 { 00:28:23.354 "name": "pt2", 00:28:23.354 "uuid": "aa8e8571-8d30-5e8f-9849-8868e85da1b4", 00:28:23.354 "is_configured": true, 00:28:23.354 "data_offset": 2048, 00:28:23.354 
"data_size": 63488 00:28:23.354 }, 00:28:23.354 { 00:28:23.354 "name": "pt3", 00:28:23.354 "uuid": "fbd47617-30f9-5e05-9307-8e992a153772", 00:28:23.354 "is_configured": true, 00:28:23.354 "data_offset": 2048, 00:28:23.354 "data_size": 63488 00:28:23.354 }, 00:28:23.354 { 00:28:23.354 "name": "pt4", 00:28:23.354 "uuid": "dde48c19-9c48-5478-ac0a-e12aee460c18", 00:28:23.354 "is_configured": true, 00:28:23.354 "data_offset": 2048, 00:28:23.354 "data_size": 63488 00:28:23.354 } 00:28:23.354 ] 00:28:23.354 }' 00:28:23.354 13:12:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:23.354 13:12:27 -- common/autotest_common.sh@10 -- # set +x 00:28:23.921 13:12:27 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:28:23.921 13:12:27 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:24.179 [2024-04-17 13:12:28.181544] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:24.179 [2024-04-17 13:12:28.181593] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:24.179 [2024-04-17 13:12:28.181675] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:24.179 [2024-04-17 13:12:28.181759] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:24.179 [2024-04-17 13:12:28.181772] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:28:24.179 13:12:28 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:24.179 13:12:28 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:28:24.436 13:12:28 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:28:24.436 13:12:28 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:28:24.436 13:12:28 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:24.694 [2024-04-17 13:12:28.745684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:24.694 [2024-04-17 13:12:28.745799] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:24.694 [2024-04-17 13:12:28.745851] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:28:24.694 [2024-04-17 13:12:28.745876] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:24.694 [2024-04-17 13:12:28.748453] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:24.694 [2024-04-17 13:12:28.748536] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:24.694 [2024-04-17 13:12:28.748655] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:28:24.694 [2024-04-17 13:12:28.748709] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:24.694 pt1 00:28:24.694 13:12:28 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:28:24.694 13:12:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:24.694 13:12:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:24.694 13:12:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:24.694 13:12:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:24.694 13:12:28 -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=4 00:28:24.694 13:12:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:24.694 13:12:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:24.694 13:12:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:24.694 13:12:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:24.694 13:12:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:24.694 13:12:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:24.952 13:12:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:24.952 "name": "raid_bdev1", 00:28:24.952 "uuid": "cc531de7-26c6-4bdf-a1ca-4dbed1306156", 00:28:24.952 "strip_size_kb": 64, 00:28:24.952 "state": "configuring", 00:28:24.952 "raid_level": "raid5f", 00:28:24.952 "superblock": true, 00:28:24.952 "num_base_bdevs": 4, 00:28:24.952 "num_base_bdevs_discovered": 1, 00:28:24.952 "num_base_bdevs_operational": 4, 00:28:24.952 "base_bdevs_list": [ 00:28:24.952 { 00:28:24.952 "name": "pt1", 00:28:24.952 "uuid": "0f76869b-ae1b-54b1-a9db-66a7365a3186", 00:28:24.952 "is_configured": true, 00:28:24.952 "data_offset": 2048, 00:28:24.952 "data_size": 63488 00:28:24.952 }, 00:28:24.952 { 00:28:24.952 "name": null, 00:28:24.952 "uuid": "aa8e8571-8d30-5e8f-9849-8868e85da1b4", 00:28:24.952 "is_configured": false, 00:28:24.952 "data_offset": 2048, 00:28:24.952 "data_size": 63488 00:28:24.952 }, 00:28:24.952 { 00:28:24.952 "name": null, 00:28:24.952 "uuid": "fbd47617-30f9-5e05-9307-8e992a153772", 00:28:24.952 "is_configured": false, 00:28:24.952 "data_offset": 2048, 00:28:24.952 "data_size": 63488 00:28:24.952 }, 00:28:24.952 { 00:28:24.952 "name": null, 00:28:24.952 "uuid": "dde48c19-9c48-5478-ac0a-e12aee460c18", 00:28:24.952 "is_configured": false, 00:28:24.952 "data_offset": 2048, 00:28:24.952 "data_size": 63488 00:28:24.952 } 00:28:24.952 ] 00:28:24.952 }' 00:28:24.952 13:12:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:24.952 13:12:29 -- common/autotest_common.sh@10 -- # set +x 00:28:25.889 13:12:29 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:28:25.889 13:12:29 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:28:25.889 13:12:29 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:28:25.889 13:12:29 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:28:25.889 13:12:29 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:28:25.889 13:12:29 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:28:26.146 13:12:30 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:28:26.146 13:12:30 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:28:26.146 13:12:30 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:28:26.405 13:12:30 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:28:26.405 13:12:30 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:28:26.405 13:12:30 -- bdev/bdev_raid.sh@489 -- # i=3 00:28:26.405 13:12:30 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:28:26.664 [2024-04-17 13:12:30.718165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:28:26.664 [2024-04-17 13:12:30.718296] vbdev_passthru.c: 636:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:28:26.664 [2024-04-17 13:12:30.718334] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cf80 00:28:26.664 [2024-04-17 13:12:30.718363] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:26.664 [2024-04-17 13:12:30.718941] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:26.664 [2024-04-17 13:12:30.719009] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:28:26.664 [2024-04-17 13:12:30.719130] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:28:26.664 [2024-04-17 13:12:30.719147] bdev_raid.c:3395:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:28:26.664 [2024-04-17 13:12:30.719154] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:26.664 [2024-04-17 13:12:30.719176] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cc80 name raid_bdev1, state configuring 00:28:26.664 [2024-04-17 13:12:30.719245] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:28:26.664 pt4 00:28:26.664 13:12:30 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:28:26.664 13:12:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:26.664 13:12:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:26.664 13:12:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:26.664 13:12:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:26.664 13:12:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:26.664 13:12:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:26.664 13:12:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:26.664 13:12:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:26.664 13:12:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:26.664 13:12:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:26.664 13:12:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:26.923 13:12:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:26.923 "name": "raid_bdev1", 00:28:26.923 "uuid": "cc531de7-26c6-4bdf-a1ca-4dbed1306156", 00:28:26.923 "strip_size_kb": 64, 00:28:26.923 "state": "configuring", 00:28:26.923 "raid_level": "raid5f", 00:28:26.923 "superblock": true, 00:28:26.923 "num_base_bdevs": 4, 00:28:26.923 "num_base_bdevs_discovered": 1, 00:28:26.923 "num_base_bdevs_operational": 3, 00:28:26.923 "base_bdevs_list": [ 00:28:26.923 { 00:28:26.923 "name": null, 00:28:26.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:26.923 "is_configured": false, 00:28:26.923 "data_offset": 2048, 00:28:26.923 "data_size": 63488 00:28:26.923 }, 00:28:26.923 { 00:28:26.923 "name": null, 00:28:26.923 "uuid": "aa8e8571-8d30-5e8f-9849-8868e85da1b4", 00:28:26.923 "is_configured": false, 00:28:26.923 "data_offset": 2048, 00:28:26.923 "data_size": 63488 00:28:26.923 }, 00:28:26.923 { 00:28:26.923 "name": null, 00:28:26.923 "uuid": "fbd47617-30f9-5e05-9307-8e992a153772", 00:28:26.923 "is_configured": false, 00:28:26.923 "data_offset": 2048, 00:28:26.923 "data_size": 63488 00:28:26.923 }, 00:28:26.923 { 00:28:26.923 "name": "pt4", 00:28:26.923 "uuid": "dde48c19-9c48-5478-ac0a-e12aee460c18", 00:28:26.923 "is_configured": 
true, 00:28:26.923 "data_offset": 2048, 00:28:26.923 "data_size": 63488 00:28:26.923 } 00:28:26.923 ] 00:28:26.923 }' 00:28:26.923 13:12:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:26.923 13:12:30 -- common/autotest_common.sh@10 -- # set +x 00:28:27.490 13:12:31 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:28:27.490 13:12:31 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:28:27.490 13:12:31 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:27.750 [2024-04-17 13:12:31.882408] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:27.750 [2024-04-17 13:12:31.882544] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:27.750 [2024-04-17 13:12:31.882589] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d580 00:28:27.750 [2024-04-17 13:12:31.882619] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:27.750 [2024-04-17 13:12:31.883136] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:27.750 [2024-04-17 13:12:31.883214] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:27.750 [2024-04-17 13:12:31.883327] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:28:27.750 [2024-04-17 13:12:31.883356] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:27.750 pt2 00:28:28.009 13:12:31 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:28:28.009 13:12:31 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:28:28.009 13:12:31 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:28:28.009 [2024-04-17 13:12:32.106489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:28:28.009 [2024-04-17 13:12:32.106621] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:28.009 [2024-04-17 13:12:32.106660] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d880 00:28:28.009 [2024-04-17 13:12:32.106688] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:28.009 [2024-04-17 13:12:32.107209] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:28.009 [2024-04-17 13:12:32.107283] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:28:28.009 [2024-04-17 13:12:32.107400] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:28:28.009 [2024-04-17 13:12:32.107430] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:28:28.009 [2024-04-17 13:12:32.107599] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000d280 00:28:28.009 [2024-04-17 13:12:32.107624] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:28:28.009 [2024-04-17 13:12:32.107720] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:28:28.009 [2024-04-17 13:12:32.114241] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000d280 00:28:28.009 [2024-04-17 13:12:32.114271] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x61600000d280 00:28:28.009 [2024-04-17 13:12:32.114528] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:28.009 pt3 00:28:28.009 13:12:32 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:28:28.009 13:12:32 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:28:28.009 13:12:32 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:28:28.009 13:12:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:28.009 13:12:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:28.009 13:12:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:28.009 13:12:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:28.009 13:12:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:28.009 13:12:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:28.009 13:12:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:28.009 13:12:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:28.009 13:12:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:28.009 13:12:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:28.009 13:12:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:28.268 13:12:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:28.268 "name": "raid_bdev1", 00:28:28.268 "uuid": "cc531de7-26c6-4bdf-a1ca-4dbed1306156", 00:28:28.268 "strip_size_kb": 64, 00:28:28.268 "state": "online", 00:28:28.268 "raid_level": "raid5f", 00:28:28.268 "superblock": true, 00:28:28.268 "num_base_bdevs": 4, 00:28:28.268 "num_base_bdevs_discovered": 3, 00:28:28.268 "num_base_bdevs_operational": 3, 00:28:28.268 "base_bdevs_list": [ 00:28:28.268 { 00:28:28.268 "name": null, 00:28:28.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:28.268 "is_configured": false, 00:28:28.268 "data_offset": 2048, 00:28:28.268 "data_size": 63488 00:28:28.268 }, 00:28:28.268 { 00:28:28.268 "name": "pt2", 00:28:28.268 "uuid": "aa8e8571-8d30-5e8f-9849-8868e85da1b4", 00:28:28.268 "is_configured": true, 00:28:28.268 "data_offset": 2048, 00:28:28.268 "data_size": 63488 00:28:28.268 }, 00:28:28.268 { 00:28:28.268 "name": "pt3", 00:28:28.268 "uuid": "fbd47617-30f9-5e05-9307-8e992a153772", 00:28:28.268 "is_configured": true, 00:28:28.268 "data_offset": 2048, 00:28:28.268 "data_size": 63488 00:28:28.268 }, 00:28:28.268 { 00:28:28.268 "name": "pt4", 00:28:28.268 "uuid": "dde48c19-9c48-5478-ac0a-e12aee460c18", 00:28:28.268 "is_configured": true, 00:28:28.268 "data_offset": 2048, 00:28:28.268 "data_size": 63488 00:28:28.268 } 00:28:28.268 ] 00:28:28.268 }' 00:28:28.268 13:12:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:28.268 13:12:32 -- common/autotest_common.sh@10 -- # set +x 00:28:29.205 13:12:33 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:29.205 13:12:33 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:28:29.205 [2024-04-17 13:12:33.274140] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:29.205 13:12:33 -- bdev/bdev_raid.sh@506 -- # '[' cc531de7-26c6-4bdf-a1ca-4dbed1306156 '!=' cc531de7-26c6-4bdf-a1ca-4dbed1306156 ']' 00:28:29.205 13:12:33 -- bdev/bdev_raid.sh@511 -- # killprocess 139458 00:28:29.205 13:12:33 -- common/autotest_common.sh@924 -- # '[' -z 139458 ']' 00:28:29.205 13:12:33 -- common/autotest_common.sh@928 -- # kill -0 139458 
00:28:29.205 13:12:33 -- common/autotest_common.sh@929 -- # uname 00:28:29.205 13:12:33 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:28:29.205 13:12:33 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 139458 00:28:29.205 killing process with pid 139458 00:28:29.205 13:12:33 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:28:29.205 13:12:33 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:28:29.205 13:12:33 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 139458' 00:28:29.205 13:12:33 -- common/autotest_common.sh@943 -- # kill 139458 00:28:29.205 13:12:33 -- common/autotest_common.sh@948 -- # wait 139458 00:28:29.205 [2024-04-17 13:12:33.311649] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:29.205 [2024-04-17 13:12:33.311738] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:29.205 [2024-04-17 13:12:33.311862] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:29.205 [2024-04-17 13:12:33.311902] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000d280 name raid_bdev1, state offline 00:28:29.773 [2024-04-17 13:12:33.618369] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:30.712 ************************************ 00:28:30.712 END TEST raid5f_superblock_test 00:28:30.712 ************************************ 00:28:30.712 13:12:34 -- bdev/bdev_raid.sh@513 -- # return 0 00:28:30.712 00:28:30.712 real 0m24.778s 00:28:30.712 user 0m45.816s 00:28:30.712 sys 0m2.845s 00:28:30.712 13:12:34 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:28:30.712 13:12:34 -- common/autotest_common.sh@10 -- # set +x 00:28:30.712 13:12:34 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:28:30.712 13:12:34 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false 00:28:30.712 13:12:34 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:28:30.712 13:12:34 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:28:30.712 13:12:34 -- common/autotest_common.sh@10 -- # set +x 00:28:30.712 ************************************ 00:28:30.712 START TEST raid5f_rebuild_test 00:28:30.713 ************************************ 00:28:30.713 13:12:34 -- common/autotest_common.sh@1099 -- # raid_rebuild_test raid5f 4 false false 00:28:30.713 13:12:34 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:28:30.713 13:12:34 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:28:30.713 13:12:34 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:28:30.713 13:12:34 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:28:30.713 13:12:34 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:28:30.713 13:12:34 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:28:30.713 13:12:34 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:28:30.713 13:12:34 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:28:30.713 13:12:34 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:28:30.713 13:12:34 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:28:30.713 13:12:34 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:28:30.713 13:12:34 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:28:30.713 13:12:34 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:28:30.713 13:12:34 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:28:30.713 13:12:34 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 
00:28:30.713 13:12:34 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:28:30.713 13:12:34 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:28:30.713 13:12:34 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:28:30.713 13:12:34 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:28:30.713 13:12:34 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:28:30.713 13:12:34 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:28:30.713 13:12:34 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:28:30.713 13:12:34 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:28:30.713 13:12:34 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:28:30.713 13:12:34 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:28:30.713 13:12:34 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:28:30.713 13:12:34 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:28:30.713 13:12:34 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:28:30.713 13:12:34 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:28:30.713 13:12:34 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:28:30.713 13:12:34 -- bdev/bdev_raid.sh@544 -- # raid_pid=140192 00:28:30.713 13:12:34 -- bdev/bdev_raid.sh@545 -- # waitforlisten 140192 /var/tmp/spdk-raid.sock 00:28:30.713 13:12:34 -- common/autotest_common.sh@817 -- # '[' -z 140192 ']' 00:28:30.713 13:12:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:30.713 13:12:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:30.713 13:12:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:30.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:30.713 13:12:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:30.713 13:12:34 -- common/autotest_common.sh@10 -- # set +x 00:28:30.713 13:12:34 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:30.981 [2024-04-17 13:12:34.857262] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:28:30.981 [2024-04-17 13:12:34.857710] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140192 ] 00:28:30.981 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:30.981 Zero copy mechanism will not be used. 
00:28:30.981 [2024-04-17 13:12:35.022068] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:31.239 [2024-04-17 13:12:35.227272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:31.498 [2024-04-17 13:12:35.423045] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:31.757 13:12:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:31.757 13:12:35 -- common/autotest_common.sh@850 -- # return 0 00:28:31.757 13:12:35 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:28:31.757 13:12:35 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:28:31.757 13:12:35 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:28:32.016 BaseBdev1 00:28:32.016 13:12:36 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:28:32.016 13:12:36 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:28:32.016 13:12:36 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:28:32.275 BaseBdev2 00:28:32.275 13:12:36 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:28:32.275 13:12:36 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:28:32.275 13:12:36 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:28:32.534 BaseBdev3 00:28:32.534 13:12:36 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:28:32.534 13:12:36 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:28:32.534 13:12:36 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:28:32.793 BaseBdev4 00:28:32.793 13:12:36 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:28:33.051 spare_malloc 00:28:33.051 13:12:37 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:33.310 spare_delay 00:28:33.311 13:12:37 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:33.570 [2024-04-17 13:12:37.550183] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:33.570 [2024-04-17 13:12:37.550350] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:33.570 [2024-04-17 13:12:37.550391] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:28:33.570 [2024-04-17 13:12:37.550435] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:33.570 [2024-04-17 13:12:37.553025] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:33.570 [2024-04-17 13:12:37.553093] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:33.570 spare 00:28:33.570 13:12:37 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:28:33.829 [2024-04-17 13:12:37.762288] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:33.829 [2024-04-17 13:12:37.764228] 
bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:33.829 [2024-04-17 13:12:37.764293] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:33.829 [2024-04-17 13:12:37.764335] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:33.829 [2024-04-17 13:12:37.764418] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:28:33.829 [2024-04-17 13:12:37.764437] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:28:33.829 [2024-04-17 13:12:37.764587] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:28:33.829 [2024-04-17 13:12:37.770949] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:28:33.829 [2024-04-17 13:12:37.770971] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:28:33.829 [2024-04-17 13:12:37.771221] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:33.829 13:12:37 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:28:33.829 13:12:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:33.829 13:12:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:33.829 13:12:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:33.829 13:12:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:33.829 13:12:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:28:33.829 13:12:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:33.829 13:12:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:33.829 13:12:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:33.829 13:12:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:33.829 13:12:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:33.829 13:12:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:34.088 13:12:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:34.088 "name": "raid_bdev1", 00:28:34.088 "uuid": "2304bb37-2cc4-494f-8be7-d5641364042c", 00:28:34.088 "strip_size_kb": 64, 00:28:34.088 "state": "online", 00:28:34.088 "raid_level": "raid5f", 00:28:34.088 "superblock": false, 00:28:34.088 "num_base_bdevs": 4, 00:28:34.088 "num_base_bdevs_discovered": 4, 00:28:34.088 "num_base_bdevs_operational": 4, 00:28:34.088 "base_bdevs_list": [ 00:28:34.088 { 00:28:34.088 "name": "BaseBdev1", 00:28:34.088 "uuid": "e80fda5c-4c1d-4971-89ed-38c3fff5c510", 00:28:34.088 "is_configured": true, 00:28:34.088 "data_offset": 0, 00:28:34.088 "data_size": 65536 00:28:34.088 }, 00:28:34.088 { 00:28:34.088 "name": "BaseBdev2", 00:28:34.088 "uuid": "3510ff80-0733-4a1a-a67a-98c4b360847d", 00:28:34.088 "is_configured": true, 00:28:34.088 "data_offset": 0, 00:28:34.088 "data_size": 65536 00:28:34.088 }, 00:28:34.088 { 00:28:34.088 "name": "BaseBdev3", 00:28:34.088 "uuid": "c59210ce-5534-4ea2-855c-c4e00e8b412c", 00:28:34.088 "is_configured": true, 00:28:34.088 "data_offset": 0, 00:28:34.088 "data_size": 65536 00:28:34.088 }, 00:28:34.088 { 00:28:34.088 "name": "BaseBdev4", 00:28:34.088 "uuid": "2e1cc900-444b-4c35-a0fc-ec994917d571", 00:28:34.088 "is_configured": true, 00:28:34.088 "data_offset": 0, 00:28:34.088 "data_size": 65536 00:28:34.088 } 00:28:34.088 ] 00:28:34.088 }' 00:28:34.088 
13:12:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:34.088 13:12:38 -- common/autotest_common.sh@10 -- # set +x 00:28:34.655 13:12:38 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:34.655 13:12:38 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:28:34.914 [2024-04-17 13:12:38.986907] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:34.914 13:12:38 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=196608 00:28:34.914 13:12:39 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:34.914 13:12:39 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:35.173 13:12:39 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:28:35.173 13:12:39 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:28:35.173 13:12:39 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:28:35.173 13:12:39 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:28:35.174 13:12:39 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:35.174 13:12:39 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:28:35.174 13:12:39 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:35.174 13:12:39 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:28:35.174 13:12:39 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:35.174 13:12:39 -- bdev/nbd_common.sh@12 -- # local i 00:28:35.174 13:12:39 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:35.174 13:12:39 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:35.174 13:12:39 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:28:35.432 [2024-04-17 13:12:39.482866] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:28:35.432 /dev/nbd0 00:28:35.432 13:12:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:35.432 13:12:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:35.432 13:12:39 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:28:35.432 13:12:39 -- common/autotest_common.sh@855 -- # local i 00:28:35.432 13:12:39 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:28:35.432 13:12:39 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:28:35.432 13:12:39 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:28:35.432 13:12:39 -- common/autotest_common.sh@859 -- # break 00:28:35.432 13:12:39 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:28:35.432 13:12:39 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:28:35.432 13:12:39 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:35.432 1+0 records in 00:28:35.432 1+0 records out 00:28:35.432 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000204053 s, 20.1 MB/s 00:28:35.432 13:12:39 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:35.432 13:12:39 -- common/autotest_common.sh@872 -- # size=4096 00:28:35.432 13:12:39 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:35.432 13:12:39 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:28:35.432 13:12:39 -- common/autotest_common.sh@875 -- # return 0 00:28:35.432 13:12:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:35.432 13:12:39 -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:28:35.432 13:12:39 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:28:35.432 13:12:39 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:28:35.432 13:12:39 -- bdev/bdev_raid.sh@582 -- # echo 192 00:28:35.432 13:12:39 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:28:35.999 512+0 records in 00:28:35.999 512+0 records out 00:28:35.999 100663296 bytes (101 MB, 96 MiB) copied, 0.55465 s, 181 MB/s 00:28:35.999 13:12:40 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:28:35.999 13:12:40 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:35.999 13:12:40 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:28:35.999 13:12:40 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:35.999 13:12:40 -- bdev/nbd_common.sh@51 -- # local i 00:28:35.999 13:12:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:35.999 13:12:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:36.257 13:12:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:36.257 13:12:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:36.257 13:12:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:36.257 13:12:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:36.257 13:12:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:36.257 13:12:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:36.257 [2024-04-17 13:12:40.399910] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:36.257 13:12:40 -- bdev/nbd_common.sh@41 -- # break 00:28:36.257 13:12:40 -- bdev/nbd_common.sh@45 -- # return 0 00:28:36.257 13:12:40 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:28:36.516 [2024-04-17 13:12:40.611238] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:36.516 13:12:40 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:28:36.516 13:12:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:36.516 13:12:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:36.516 13:12:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:36.516 13:12:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:36.516 13:12:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:36.516 13:12:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:36.516 13:12:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:36.516 13:12:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:36.516 13:12:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:36.516 13:12:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:36.516 13:12:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:36.774 13:12:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:36.774 "name": "raid_bdev1", 00:28:36.774 "uuid": "2304bb37-2cc4-494f-8be7-d5641364042c", 00:28:36.774 "strip_size_kb": 64, 00:28:36.774 "state": "online", 00:28:36.774 "raid_level": "raid5f", 00:28:36.774 "superblock": false, 00:28:36.774 "num_base_bdevs": 4, 00:28:36.774 "num_base_bdevs_discovered": 3, 00:28:36.774 "num_base_bdevs_operational": 3, 00:28:36.774 "base_bdevs_list": [ 00:28:36.774 { 00:28:36.774 "name": 
null, 00:28:36.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:36.774 "is_configured": false, 00:28:36.774 "data_offset": 0, 00:28:36.774 "data_size": 65536 00:28:36.774 }, 00:28:36.774 { 00:28:36.774 "name": "BaseBdev2", 00:28:36.774 "uuid": "3510ff80-0733-4a1a-a67a-98c4b360847d", 00:28:36.774 "is_configured": true, 00:28:36.774 "data_offset": 0, 00:28:36.774 "data_size": 65536 00:28:36.774 }, 00:28:36.774 { 00:28:36.774 "name": "BaseBdev3", 00:28:36.774 "uuid": "c59210ce-5534-4ea2-855c-c4e00e8b412c", 00:28:36.774 "is_configured": true, 00:28:36.774 "data_offset": 0, 00:28:36.774 "data_size": 65536 00:28:36.774 }, 00:28:36.774 { 00:28:36.774 "name": "BaseBdev4", 00:28:36.774 "uuid": "2e1cc900-444b-4c35-a0fc-ec994917d571", 00:28:36.774 "is_configured": true, 00:28:36.774 "data_offset": 0, 00:28:36.774 "data_size": 65536 00:28:36.774 } 00:28:36.774 ] 00:28:36.774 }' 00:28:36.774 13:12:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:36.774 13:12:40 -- common/autotest_common.sh@10 -- # set +x 00:28:37.708 13:12:41 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:37.708 [2024-04-17 13:12:41.751596] bdev_raid.c:3247:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:28:37.708 [2024-04-17 13:12:41.751652] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:37.708 [2024-04-17 13:12:41.764924] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002d220 00:28:37.708 [2024-04-17 13:12:41.773204] bdev_raid.c:2751:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:37.708 13:12:41 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:28:38.643 13:12:42 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:38.643 13:12:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:38.643 13:12:42 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:38.643 13:12:42 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:38.643 13:12:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:38.643 13:12:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:38.643 13:12:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:38.902 13:12:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:38.902 "name": "raid_bdev1", 00:28:38.902 "uuid": "2304bb37-2cc4-494f-8be7-d5641364042c", 00:28:38.902 "strip_size_kb": 64, 00:28:38.902 "state": "online", 00:28:38.902 "raid_level": "raid5f", 00:28:38.902 "superblock": false, 00:28:38.902 "num_base_bdevs": 4, 00:28:38.902 "num_base_bdevs_discovered": 4, 00:28:38.902 "num_base_bdevs_operational": 4, 00:28:38.902 "process": { 00:28:38.902 "type": "rebuild", 00:28:38.902 "target": "spare", 00:28:38.902 "progress": { 00:28:38.902 "blocks": 23040, 00:28:38.902 "percent": 11 00:28:38.902 } 00:28:38.902 }, 00:28:38.902 "base_bdevs_list": [ 00:28:38.902 { 00:28:38.902 "name": "spare", 00:28:38.902 "uuid": "e154248e-4a0a-5b14-ab90-5e0092635767", 00:28:38.902 "is_configured": true, 00:28:38.902 "data_offset": 0, 00:28:38.902 "data_size": 65536 00:28:38.902 }, 00:28:38.902 { 00:28:38.902 "name": "BaseBdev2", 00:28:38.902 "uuid": "3510ff80-0733-4a1a-a67a-98c4b360847d", 00:28:38.902 "is_configured": true, 00:28:38.902 "data_offset": 0, 00:28:38.902 "data_size": 65536 00:28:38.902 }, 00:28:38.902 { 00:28:38.902 "name": 
"BaseBdev3", 00:28:38.902 "uuid": "c59210ce-5534-4ea2-855c-c4e00e8b412c", 00:28:38.902 "is_configured": true, 00:28:38.902 "data_offset": 0, 00:28:38.902 "data_size": 65536 00:28:38.902 }, 00:28:38.902 { 00:28:38.902 "name": "BaseBdev4", 00:28:38.902 "uuid": "2e1cc900-444b-4c35-a0fc-ec994917d571", 00:28:38.902 "is_configured": true, 00:28:38.902 "data_offset": 0, 00:28:38.902 "data_size": 65536 00:28:38.902 } 00:28:38.902 ] 00:28:38.902 }' 00:28:38.902 13:12:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:38.903 13:12:43 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:38.903 13:12:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:39.161 13:12:43 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:28:39.161 13:12:43 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:39.419 [2024-04-17 13:12:43.354621] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:39.419 [2024-04-17 13:12:43.386698] bdev_raid.c:2442:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:39.419 [2024-04-17 13:12:43.386857] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:39.419 13:12:43 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:28:39.419 13:12:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:39.419 13:12:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:39.419 13:12:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:39.419 13:12:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:39.419 13:12:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:39.419 13:12:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:39.419 13:12:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:39.419 13:12:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:39.419 13:12:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:39.419 13:12:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:39.419 13:12:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:39.705 13:12:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:39.705 "name": "raid_bdev1", 00:28:39.705 "uuid": "2304bb37-2cc4-494f-8be7-d5641364042c", 00:28:39.705 "strip_size_kb": 64, 00:28:39.705 "state": "online", 00:28:39.705 "raid_level": "raid5f", 00:28:39.705 "superblock": false, 00:28:39.705 "num_base_bdevs": 4, 00:28:39.705 "num_base_bdevs_discovered": 3, 00:28:39.705 "num_base_bdevs_operational": 3, 00:28:39.705 "base_bdevs_list": [ 00:28:39.705 { 00:28:39.705 "name": null, 00:28:39.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:39.705 "is_configured": false, 00:28:39.705 "data_offset": 0, 00:28:39.705 "data_size": 65536 00:28:39.705 }, 00:28:39.705 { 00:28:39.705 "name": "BaseBdev2", 00:28:39.705 "uuid": "3510ff80-0733-4a1a-a67a-98c4b360847d", 00:28:39.705 "is_configured": true, 00:28:39.705 "data_offset": 0, 00:28:39.705 "data_size": 65536 00:28:39.705 }, 00:28:39.705 { 00:28:39.705 "name": "BaseBdev3", 00:28:39.705 "uuid": "c59210ce-5534-4ea2-855c-c4e00e8b412c", 00:28:39.705 "is_configured": true, 00:28:39.705 "data_offset": 0, 00:28:39.705 "data_size": 65536 00:28:39.705 }, 00:28:39.705 { 00:28:39.705 "name": "BaseBdev4", 00:28:39.705 "uuid": 
"2e1cc900-444b-4c35-a0fc-ec994917d571", 00:28:39.705 "is_configured": true, 00:28:39.705 "data_offset": 0, 00:28:39.705 "data_size": 65536 00:28:39.705 } 00:28:39.705 ] 00:28:39.705 }' 00:28:39.705 13:12:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:39.705 13:12:43 -- common/autotest_common.sh@10 -- # set +x 00:28:40.289 13:12:44 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:40.289 13:12:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:40.289 13:12:44 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:28:40.289 13:12:44 -- bdev/bdev_raid.sh@185 -- # local target=none 00:28:40.289 13:12:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:40.289 13:12:44 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:40.289 13:12:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:40.547 13:12:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:40.547 "name": "raid_bdev1", 00:28:40.547 "uuid": "2304bb37-2cc4-494f-8be7-d5641364042c", 00:28:40.547 "strip_size_kb": 64, 00:28:40.547 "state": "online", 00:28:40.547 "raid_level": "raid5f", 00:28:40.547 "superblock": false, 00:28:40.547 "num_base_bdevs": 4, 00:28:40.547 "num_base_bdevs_discovered": 3, 00:28:40.547 "num_base_bdevs_operational": 3, 00:28:40.547 "base_bdevs_list": [ 00:28:40.547 { 00:28:40.547 "name": null, 00:28:40.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:40.547 "is_configured": false, 00:28:40.547 "data_offset": 0, 00:28:40.547 "data_size": 65536 00:28:40.547 }, 00:28:40.547 { 00:28:40.547 "name": "BaseBdev2", 00:28:40.547 "uuid": "3510ff80-0733-4a1a-a67a-98c4b360847d", 00:28:40.547 "is_configured": true, 00:28:40.547 "data_offset": 0, 00:28:40.547 "data_size": 65536 00:28:40.547 }, 00:28:40.547 { 00:28:40.547 "name": "BaseBdev3", 00:28:40.547 "uuid": "c59210ce-5534-4ea2-855c-c4e00e8b412c", 00:28:40.547 "is_configured": true, 00:28:40.547 "data_offset": 0, 00:28:40.547 "data_size": 65536 00:28:40.547 }, 00:28:40.547 { 00:28:40.547 "name": "BaseBdev4", 00:28:40.547 "uuid": "2e1cc900-444b-4c35-a0fc-ec994917d571", 00:28:40.547 "is_configured": true, 00:28:40.547 "data_offset": 0, 00:28:40.547 "data_size": 65536 00:28:40.547 } 00:28:40.547 ] 00:28:40.547 }' 00:28:40.547 13:12:44 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:40.547 13:12:44 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:40.547 13:12:44 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:40.805 13:12:44 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:28:40.805 13:12:44 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:40.805 [2024-04-17 13:12:44.902374] bdev_raid.c:3247:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:28:40.805 [2024-04-17 13:12:44.902450] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:40.805 [2024-04-17 13:12:44.914417] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002d3c0 00:28:40.805 [2024-04-17 13:12:44.922706] bdev_raid.c:2751:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:40.805 13:12:44 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:28:42.178 13:12:45 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:42.178 13:12:45 -- bdev/bdev_raid.sh@183 -- # 
local raid_bdev_name=raid_bdev1 00:28:42.178 13:12:45 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:42.178 13:12:45 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:42.178 13:12:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:42.178 13:12:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:42.178 13:12:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:42.178 13:12:46 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:42.178 "name": "raid_bdev1", 00:28:42.179 "uuid": "2304bb37-2cc4-494f-8be7-d5641364042c", 00:28:42.179 "strip_size_kb": 64, 00:28:42.179 "state": "online", 00:28:42.179 "raid_level": "raid5f", 00:28:42.179 "superblock": false, 00:28:42.179 "num_base_bdevs": 4, 00:28:42.179 "num_base_bdevs_discovered": 4, 00:28:42.179 "num_base_bdevs_operational": 4, 00:28:42.179 "process": { 00:28:42.179 "type": "rebuild", 00:28:42.179 "target": "spare", 00:28:42.179 "progress": { 00:28:42.179 "blocks": 23040, 00:28:42.179 "percent": 11 00:28:42.179 } 00:28:42.179 }, 00:28:42.179 "base_bdevs_list": [ 00:28:42.179 { 00:28:42.179 "name": "spare", 00:28:42.179 "uuid": "e154248e-4a0a-5b14-ab90-5e0092635767", 00:28:42.179 "is_configured": true, 00:28:42.179 "data_offset": 0, 00:28:42.179 "data_size": 65536 00:28:42.179 }, 00:28:42.179 { 00:28:42.179 "name": "BaseBdev2", 00:28:42.179 "uuid": "3510ff80-0733-4a1a-a67a-98c4b360847d", 00:28:42.179 "is_configured": true, 00:28:42.179 "data_offset": 0, 00:28:42.179 "data_size": 65536 00:28:42.179 }, 00:28:42.179 { 00:28:42.179 "name": "BaseBdev3", 00:28:42.179 "uuid": "c59210ce-5534-4ea2-855c-c4e00e8b412c", 00:28:42.179 "is_configured": true, 00:28:42.179 "data_offset": 0, 00:28:42.179 "data_size": 65536 00:28:42.179 }, 00:28:42.179 { 00:28:42.179 "name": "BaseBdev4", 00:28:42.179 "uuid": "2e1cc900-444b-4c35-a0fc-ec994917d571", 00:28:42.179 "is_configured": true, 00:28:42.179 "data_offset": 0, 00:28:42.179 "data_size": 65536 00:28:42.179 } 00:28:42.179 ] 00:28:42.179 }' 00:28:42.179 13:12:46 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:42.179 13:12:46 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:42.179 13:12:46 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:42.179 13:12:46 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:28:42.179 13:12:46 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:28:42.179 13:12:46 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:28:42.179 13:12:46 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:28:42.179 13:12:46 -- bdev/bdev_raid.sh@657 -- # local timeout=787 00:28:42.179 13:12:46 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:28:42.179 13:12:46 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:42.179 13:12:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:42.179 13:12:46 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:42.179 13:12:46 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:42.179 13:12:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:42.179 13:12:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:42.179 13:12:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:42.437 13:12:46 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:42.437 "name": 
"raid_bdev1", 00:28:42.437 "uuid": "2304bb37-2cc4-494f-8be7-d5641364042c", 00:28:42.437 "strip_size_kb": 64, 00:28:42.437 "state": "online", 00:28:42.437 "raid_level": "raid5f", 00:28:42.437 "superblock": false, 00:28:42.437 "num_base_bdevs": 4, 00:28:42.437 "num_base_bdevs_discovered": 4, 00:28:42.437 "num_base_bdevs_operational": 4, 00:28:42.437 "process": { 00:28:42.437 "type": "rebuild", 00:28:42.437 "target": "spare", 00:28:42.437 "progress": { 00:28:42.437 "blocks": 30720, 00:28:42.437 "percent": 15 00:28:42.437 } 00:28:42.437 }, 00:28:42.437 "base_bdevs_list": [ 00:28:42.437 { 00:28:42.437 "name": "spare", 00:28:42.437 "uuid": "e154248e-4a0a-5b14-ab90-5e0092635767", 00:28:42.437 "is_configured": true, 00:28:42.437 "data_offset": 0, 00:28:42.438 "data_size": 65536 00:28:42.438 }, 00:28:42.438 { 00:28:42.438 "name": "BaseBdev2", 00:28:42.438 "uuid": "3510ff80-0733-4a1a-a67a-98c4b360847d", 00:28:42.438 "is_configured": true, 00:28:42.438 "data_offset": 0, 00:28:42.438 "data_size": 65536 00:28:42.438 }, 00:28:42.438 { 00:28:42.438 "name": "BaseBdev3", 00:28:42.438 "uuid": "c59210ce-5534-4ea2-855c-c4e00e8b412c", 00:28:42.438 "is_configured": true, 00:28:42.438 "data_offset": 0, 00:28:42.438 "data_size": 65536 00:28:42.438 }, 00:28:42.438 { 00:28:42.438 "name": "BaseBdev4", 00:28:42.438 "uuid": "2e1cc900-444b-4c35-a0fc-ec994917d571", 00:28:42.438 "is_configured": true, 00:28:42.438 "data_offset": 0, 00:28:42.438 "data_size": 65536 00:28:42.438 } 00:28:42.438 ] 00:28:42.438 }' 00:28:42.438 13:12:46 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:42.697 13:12:46 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:42.697 13:12:46 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:42.697 13:12:46 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:28:42.697 13:12:46 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:43.633 13:12:47 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:28:43.633 13:12:47 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:43.633 13:12:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:43.633 13:12:47 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:43.633 13:12:47 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:43.633 13:12:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:43.633 13:12:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:43.633 13:12:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:43.892 13:12:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:43.892 "name": "raid_bdev1", 00:28:43.892 "uuid": "2304bb37-2cc4-494f-8be7-d5641364042c", 00:28:43.892 "strip_size_kb": 64, 00:28:43.892 "state": "online", 00:28:43.892 "raid_level": "raid5f", 00:28:43.892 "superblock": false, 00:28:43.892 "num_base_bdevs": 4, 00:28:43.892 "num_base_bdevs_discovered": 4, 00:28:43.892 "num_base_bdevs_operational": 4, 00:28:43.892 "process": { 00:28:43.892 "type": "rebuild", 00:28:43.892 "target": "spare", 00:28:43.892 "progress": { 00:28:43.892 "blocks": 55680, 00:28:43.892 "percent": 28 00:28:43.892 } 00:28:43.892 }, 00:28:43.892 "base_bdevs_list": [ 00:28:43.892 { 00:28:43.892 "name": "spare", 00:28:43.892 "uuid": "e154248e-4a0a-5b14-ab90-5e0092635767", 00:28:43.892 "is_configured": true, 00:28:43.892 "data_offset": 0, 00:28:43.892 "data_size": 65536 00:28:43.892 }, 00:28:43.892 { 00:28:43.892 "name": 
"BaseBdev2", 00:28:43.892 "uuid": "3510ff80-0733-4a1a-a67a-98c4b360847d", 00:28:43.892 "is_configured": true, 00:28:43.892 "data_offset": 0, 00:28:43.892 "data_size": 65536 00:28:43.892 }, 00:28:43.892 { 00:28:43.892 "name": "BaseBdev3", 00:28:43.892 "uuid": "c59210ce-5534-4ea2-855c-c4e00e8b412c", 00:28:43.892 "is_configured": true, 00:28:43.892 "data_offset": 0, 00:28:43.892 "data_size": 65536 00:28:43.892 }, 00:28:43.892 { 00:28:43.892 "name": "BaseBdev4", 00:28:43.892 "uuid": "2e1cc900-444b-4c35-a0fc-ec994917d571", 00:28:43.892 "is_configured": true, 00:28:43.892 "data_offset": 0, 00:28:43.892 "data_size": 65536 00:28:43.892 } 00:28:43.892 ] 00:28:43.892 }' 00:28:43.892 13:12:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:43.892 13:12:47 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:43.892 13:12:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:43.892 13:12:48 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:28:43.892 13:12:48 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:45.309 13:12:49 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:28:45.309 13:12:49 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:45.309 13:12:49 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:45.309 13:12:49 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:45.309 13:12:49 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:45.309 13:12:49 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:45.309 13:12:49 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:45.309 13:12:49 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:45.309 13:12:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:45.309 "name": "raid_bdev1", 00:28:45.309 "uuid": "2304bb37-2cc4-494f-8be7-d5641364042c", 00:28:45.309 "strip_size_kb": 64, 00:28:45.309 "state": "online", 00:28:45.309 "raid_level": "raid5f", 00:28:45.309 "superblock": false, 00:28:45.309 "num_base_bdevs": 4, 00:28:45.309 "num_base_bdevs_discovered": 4, 00:28:45.309 "num_base_bdevs_operational": 4, 00:28:45.309 "process": { 00:28:45.309 "type": "rebuild", 00:28:45.309 "target": "spare", 00:28:45.309 "progress": { 00:28:45.309 "blocks": 82560, 00:28:45.309 "percent": 41 00:28:45.309 } 00:28:45.309 }, 00:28:45.309 "base_bdevs_list": [ 00:28:45.309 { 00:28:45.309 "name": "spare", 00:28:45.309 "uuid": "e154248e-4a0a-5b14-ab90-5e0092635767", 00:28:45.309 "is_configured": true, 00:28:45.309 "data_offset": 0, 00:28:45.309 "data_size": 65536 00:28:45.309 }, 00:28:45.309 { 00:28:45.309 "name": "BaseBdev2", 00:28:45.309 "uuid": "3510ff80-0733-4a1a-a67a-98c4b360847d", 00:28:45.309 "is_configured": true, 00:28:45.309 "data_offset": 0, 00:28:45.309 "data_size": 65536 00:28:45.309 }, 00:28:45.309 { 00:28:45.309 "name": "BaseBdev3", 00:28:45.309 "uuid": "c59210ce-5534-4ea2-855c-c4e00e8b412c", 00:28:45.309 "is_configured": true, 00:28:45.309 "data_offset": 0, 00:28:45.309 "data_size": 65536 00:28:45.309 }, 00:28:45.309 { 00:28:45.309 "name": "BaseBdev4", 00:28:45.309 "uuid": "2e1cc900-444b-4c35-a0fc-ec994917d571", 00:28:45.309 "is_configured": true, 00:28:45.309 "data_offset": 0, 00:28:45.309 "data_size": 65536 00:28:45.309 } 00:28:45.309 ] 00:28:45.309 }' 00:28:45.309 13:12:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:45.309 13:12:49 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:28:45.309 13:12:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:45.568 13:12:49 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:28:45.568 13:12:49 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:46.503 13:12:50 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:28:46.503 13:12:50 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:46.503 13:12:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:46.503 13:12:50 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:46.503 13:12:50 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:46.503 13:12:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:46.503 13:12:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:46.503 13:12:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:46.763 13:12:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:46.763 "name": "raid_bdev1", 00:28:46.763 "uuid": "2304bb37-2cc4-494f-8be7-d5641364042c", 00:28:46.763 "strip_size_kb": 64, 00:28:46.763 "state": "online", 00:28:46.763 "raid_level": "raid5f", 00:28:46.763 "superblock": false, 00:28:46.763 "num_base_bdevs": 4, 00:28:46.763 "num_base_bdevs_discovered": 4, 00:28:46.763 "num_base_bdevs_operational": 4, 00:28:46.763 "process": { 00:28:46.763 "type": "rebuild", 00:28:46.763 "target": "spare", 00:28:46.763 "progress": { 00:28:46.763 "blocks": 109440, 00:28:46.763 "percent": 55 00:28:46.763 } 00:28:46.763 }, 00:28:46.763 "base_bdevs_list": [ 00:28:46.763 { 00:28:46.763 "name": "spare", 00:28:46.763 "uuid": "e154248e-4a0a-5b14-ab90-5e0092635767", 00:28:46.763 "is_configured": true, 00:28:46.763 "data_offset": 0, 00:28:46.763 "data_size": 65536 00:28:46.763 }, 00:28:46.763 { 00:28:46.763 "name": "BaseBdev2", 00:28:46.763 "uuid": "3510ff80-0733-4a1a-a67a-98c4b360847d", 00:28:46.763 "is_configured": true, 00:28:46.763 "data_offset": 0, 00:28:46.763 "data_size": 65536 00:28:46.763 }, 00:28:46.763 { 00:28:46.763 "name": "BaseBdev3", 00:28:46.763 "uuid": "c59210ce-5534-4ea2-855c-c4e00e8b412c", 00:28:46.763 "is_configured": true, 00:28:46.763 "data_offset": 0, 00:28:46.763 "data_size": 65536 00:28:46.763 }, 00:28:46.763 { 00:28:46.763 "name": "BaseBdev4", 00:28:46.763 "uuid": "2e1cc900-444b-4c35-a0fc-ec994917d571", 00:28:46.763 "is_configured": true, 00:28:46.763 "data_offset": 0, 00:28:46.763 "data_size": 65536 00:28:46.763 } 00:28:46.763 ] 00:28:46.763 }' 00:28:46.763 13:12:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:46.763 13:12:50 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:46.763 13:12:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:46.763 13:12:50 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:28:46.763 13:12:50 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:47.700 13:12:51 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:28:47.700 13:12:51 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:47.700 13:12:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:47.700 13:12:51 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:47.700 13:12:51 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:47.700 13:12:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:47.700 13:12:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:28:47.700 13:12:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:48.268 13:12:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:48.268 "name": "raid_bdev1", 00:28:48.268 "uuid": "2304bb37-2cc4-494f-8be7-d5641364042c", 00:28:48.268 "strip_size_kb": 64, 00:28:48.268 "state": "online", 00:28:48.268 "raid_level": "raid5f", 00:28:48.268 "superblock": false, 00:28:48.268 "num_base_bdevs": 4, 00:28:48.268 "num_base_bdevs_discovered": 4, 00:28:48.268 "num_base_bdevs_operational": 4, 00:28:48.268 "process": { 00:28:48.268 "type": "rebuild", 00:28:48.268 "target": "spare", 00:28:48.268 "progress": { 00:28:48.268 "blocks": 136320, 00:28:48.268 "percent": 69 00:28:48.268 } 00:28:48.268 }, 00:28:48.268 "base_bdevs_list": [ 00:28:48.268 { 00:28:48.268 "name": "spare", 00:28:48.268 "uuid": "e154248e-4a0a-5b14-ab90-5e0092635767", 00:28:48.268 "is_configured": true, 00:28:48.268 "data_offset": 0, 00:28:48.268 "data_size": 65536 00:28:48.268 }, 00:28:48.268 { 00:28:48.268 "name": "BaseBdev2", 00:28:48.268 "uuid": "3510ff80-0733-4a1a-a67a-98c4b360847d", 00:28:48.268 "is_configured": true, 00:28:48.268 "data_offset": 0, 00:28:48.268 "data_size": 65536 00:28:48.268 }, 00:28:48.268 { 00:28:48.268 "name": "BaseBdev3", 00:28:48.268 "uuid": "c59210ce-5534-4ea2-855c-c4e00e8b412c", 00:28:48.268 "is_configured": true, 00:28:48.268 "data_offset": 0, 00:28:48.268 "data_size": 65536 00:28:48.268 }, 00:28:48.268 { 00:28:48.268 "name": "BaseBdev4", 00:28:48.268 "uuid": "2e1cc900-444b-4c35-a0fc-ec994917d571", 00:28:48.268 "is_configured": true, 00:28:48.268 "data_offset": 0, 00:28:48.268 "data_size": 65536 00:28:48.268 } 00:28:48.268 ] 00:28:48.268 }' 00:28:48.268 13:12:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:48.268 13:12:52 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:48.268 13:12:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:48.268 13:12:52 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:28:48.268 13:12:52 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:49.205 13:12:53 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:28:49.205 13:12:53 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:49.205 13:12:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:49.205 13:12:53 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:49.205 13:12:53 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:49.205 13:12:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:49.205 13:12:53 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:49.205 13:12:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:49.464 13:12:53 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:49.464 "name": "raid_bdev1", 00:28:49.464 "uuid": "2304bb37-2cc4-494f-8be7-d5641364042c", 00:28:49.464 "strip_size_kb": 64, 00:28:49.464 "state": "online", 00:28:49.464 "raid_level": "raid5f", 00:28:49.464 "superblock": false, 00:28:49.464 "num_base_bdevs": 4, 00:28:49.464 "num_base_bdevs_discovered": 4, 00:28:49.464 "num_base_bdevs_operational": 4, 00:28:49.464 "process": { 00:28:49.464 "type": "rebuild", 00:28:49.464 "target": "spare", 00:28:49.464 "progress": { 00:28:49.464 "blocks": 163200, 00:28:49.464 "percent": 83 00:28:49.464 } 00:28:49.464 }, 00:28:49.464 "base_bdevs_list": [ 00:28:49.464 { 00:28:49.464 "name": "spare", 
00:28:49.464 "uuid": "e154248e-4a0a-5b14-ab90-5e0092635767", 00:28:49.464 "is_configured": true, 00:28:49.464 "data_offset": 0, 00:28:49.464 "data_size": 65536 00:28:49.464 }, 00:28:49.464 { 00:28:49.464 "name": "BaseBdev2", 00:28:49.464 "uuid": "3510ff80-0733-4a1a-a67a-98c4b360847d", 00:28:49.464 "is_configured": true, 00:28:49.464 "data_offset": 0, 00:28:49.464 "data_size": 65536 00:28:49.464 }, 00:28:49.464 { 00:28:49.464 "name": "BaseBdev3", 00:28:49.464 "uuid": "c59210ce-5534-4ea2-855c-c4e00e8b412c", 00:28:49.464 "is_configured": true, 00:28:49.465 "data_offset": 0, 00:28:49.465 "data_size": 65536 00:28:49.465 }, 00:28:49.465 { 00:28:49.465 "name": "BaseBdev4", 00:28:49.465 "uuid": "2e1cc900-444b-4c35-a0fc-ec994917d571", 00:28:49.465 "is_configured": true, 00:28:49.465 "data_offset": 0, 00:28:49.465 "data_size": 65536 00:28:49.465 } 00:28:49.465 ] 00:28:49.465 }' 00:28:49.465 13:12:53 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:49.465 13:12:53 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:49.465 13:12:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:49.724 13:12:53 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:28:49.724 13:12:53 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:50.668 13:12:54 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:28:50.668 13:12:54 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:50.668 13:12:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:50.668 13:12:54 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:50.668 13:12:54 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:50.668 13:12:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:50.668 13:12:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:50.668 13:12:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:50.927 13:12:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:50.927 "name": "raid_bdev1", 00:28:50.927 "uuid": "2304bb37-2cc4-494f-8be7-d5641364042c", 00:28:50.927 "strip_size_kb": 64, 00:28:50.927 "state": "online", 00:28:50.927 "raid_level": "raid5f", 00:28:50.927 "superblock": false, 00:28:50.927 "num_base_bdevs": 4, 00:28:50.927 "num_base_bdevs_discovered": 4, 00:28:50.927 "num_base_bdevs_operational": 4, 00:28:50.927 "process": { 00:28:50.927 "type": "rebuild", 00:28:50.927 "target": "spare", 00:28:50.927 "progress": { 00:28:50.927 "blocks": 188160, 00:28:50.927 "percent": 95 00:28:50.927 } 00:28:50.927 }, 00:28:50.927 "base_bdevs_list": [ 00:28:50.927 { 00:28:50.927 "name": "spare", 00:28:50.927 "uuid": "e154248e-4a0a-5b14-ab90-5e0092635767", 00:28:50.927 "is_configured": true, 00:28:50.927 "data_offset": 0, 00:28:50.927 "data_size": 65536 00:28:50.927 }, 00:28:50.927 { 00:28:50.927 "name": "BaseBdev2", 00:28:50.927 "uuid": "3510ff80-0733-4a1a-a67a-98c4b360847d", 00:28:50.927 "is_configured": true, 00:28:50.927 "data_offset": 0, 00:28:50.927 "data_size": 65536 00:28:50.927 }, 00:28:50.927 { 00:28:50.927 "name": "BaseBdev3", 00:28:50.927 "uuid": "c59210ce-5534-4ea2-855c-c4e00e8b412c", 00:28:50.927 "is_configured": true, 00:28:50.927 "data_offset": 0, 00:28:50.927 "data_size": 65536 00:28:50.927 }, 00:28:50.927 { 00:28:50.927 "name": "BaseBdev4", 00:28:50.927 "uuid": "2e1cc900-444b-4c35-a0fc-ec994917d571", 00:28:50.927 "is_configured": true, 00:28:50.927 "data_offset": 0, 00:28:50.927 "data_size": 65536 
00:28:50.927 } 00:28:50.927 ] 00:28:50.927 }' 00:28:50.927 13:12:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:50.927 13:12:54 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:50.927 13:12:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:50.927 13:12:54 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:28:50.927 13:12:54 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:51.186 [2024-04-17 13:12:55.309550] bdev_raid.c:2716:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:51.186 [2024-04-17 13:12:55.309634] bdev_raid.c:2433:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:51.186 [2024-04-17 13:12:55.309715] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:52.120 13:12:55 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:28:52.120 13:12:55 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:52.120 13:12:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:52.120 13:12:55 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:52.120 13:12:55 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:52.121 13:12:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:52.121 13:12:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:52.121 13:12:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:52.121 13:12:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:52.121 "name": "raid_bdev1", 00:28:52.121 "uuid": "2304bb37-2cc4-494f-8be7-d5641364042c", 00:28:52.121 "strip_size_kb": 64, 00:28:52.121 "state": "online", 00:28:52.121 "raid_level": "raid5f", 00:28:52.121 "superblock": false, 00:28:52.121 "num_base_bdevs": 4, 00:28:52.121 "num_base_bdevs_discovered": 4, 00:28:52.121 "num_base_bdevs_operational": 4, 00:28:52.121 "base_bdevs_list": [ 00:28:52.121 { 00:28:52.121 "name": "spare", 00:28:52.121 "uuid": "e154248e-4a0a-5b14-ab90-5e0092635767", 00:28:52.121 "is_configured": true, 00:28:52.121 "data_offset": 0, 00:28:52.121 "data_size": 65536 00:28:52.121 }, 00:28:52.121 { 00:28:52.121 "name": "BaseBdev2", 00:28:52.121 "uuid": "3510ff80-0733-4a1a-a67a-98c4b360847d", 00:28:52.121 "is_configured": true, 00:28:52.121 "data_offset": 0, 00:28:52.121 "data_size": 65536 00:28:52.121 }, 00:28:52.121 { 00:28:52.121 "name": "BaseBdev3", 00:28:52.121 "uuid": "c59210ce-5534-4ea2-855c-c4e00e8b412c", 00:28:52.121 "is_configured": true, 00:28:52.121 "data_offset": 0, 00:28:52.121 "data_size": 65536 00:28:52.121 }, 00:28:52.121 { 00:28:52.121 "name": "BaseBdev4", 00:28:52.121 "uuid": "2e1cc900-444b-4c35-a0fc-ec994917d571", 00:28:52.121 "is_configured": true, 00:28:52.121 "data_offset": 0, 00:28:52.121 "data_size": 65536 00:28:52.121 } 00:28:52.121 ] 00:28:52.121 }' 00:28:52.121 13:12:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:52.379 13:12:56 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:52.379 13:12:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:52.379 13:12:56 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:28:52.379 13:12:56 -- bdev/bdev_raid.sh@660 -- # break 00:28:52.379 13:12:56 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:52.379 13:12:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:52.379 13:12:56 -- bdev/bdev_raid.sh@184 -- 
# local process_type=none 00:28:52.379 13:12:56 -- bdev/bdev_raid.sh@185 -- # local target=none 00:28:52.379 13:12:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:52.379 13:12:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:52.379 13:12:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:52.638 13:12:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:52.638 "name": "raid_bdev1", 00:28:52.638 "uuid": "2304bb37-2cc4-494f-8be7-d5641364042c", 00:28:52.638 "strip_size_kb": 64, 00:28:52.638 "state": "online", 00:28:52.638 "raid_level": "raid5f", 00:28:52.638 "superblock": false, 00:28:52.638 "num_base_bdevs": 4, 00:28:52.638 "num_base_bdevs_discovered": 4, 00:28:52.638 "num_base_bdevs_operational": 4, 00:28:52.638 "base_bdevs_list": [ 00:28:52.638 { 00:28:52.638 "name": "spare", 00:28:52.638 "uuid": "e154248e-4a0a-5b14-ab90-5e0092635767", 00:28:52.638 "is_configured": true, 00:28:52.638 "data_offset": 0, 00:28:52.638 "data_size": 65536 00:28:52.638 }, 00:28:52.638 { 00:28:52.638 "name": "BaseBdev2", 00:28:52.638 "uuid": "3510ff80-0733-4a1a-a67a-98c4b360847d", 00:28:52.638 "is_configured": true, 00:28:52.638 "data_offset": 0, 00:28:52.638 "data_size": 65536 00:28:52.638 }, 00:28:52.638 { 00:28:52.638 "name": "BaseBdev3", 00:28:52.638 "uuid": "c59210ce-5534-4ea2-855c-c4e00e8b412c", 00:28:52.638 "is_configured": true, 00:28:52.638 "data_offset": 0, 00:28:52.638 "data_size": 65536 00:28:52.638 }, 00:28:52.638 { 00:28:52.638 "name": "BaseBdev4", 00:28:52.638 "uuid": "2e1cc900-444b-4c35-a0fc-ec994917d571", 00:28:52.638 "is_configured": true, 00:28:52.638 "data_offset": 0, 00:28:52.638 "data_size": 65536 00:28:52.638 } 00:28:52.638 ] 00:28:52.638 }' 00:28:52.638 13:12:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:52.638 13:12:56 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:52.638 13:12:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:52.638 13:12:56 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:28:52.638 13:12:56 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:28:52.638 13:12:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:52.638 13:12:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:52.638 13:12:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:52.638 13:12:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:52.638 13:12:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:28:52.638 13:12:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:52.638 13:12:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:52.638 13:12:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:52.638 13:12:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:52.638 13:12:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:52.638 13:12:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:52.898 13:12:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:52.898 "name": "raid_bdev1", 00:28:52.898 "uuid": "2304bb37-2cc4-494f-8be7-d5641364042c", 00:28:52.898 "strip_size_kb": 64, 00:28:52.898 "state": "online", 00:28:52.898 "raid_level": "raid5f", 00:28:52.898 "superblock": false, 00:28:52.898 "num_base_bdevs": 4, 00:28:52.898 "num_base_bdevs_discovered": 4, 00:28:52.898 
"num_base_bdevs_operational": 4, 00:28:52.898 "base_bdevs_list": [ 00:28:52.898 { 00:28:52.898 "name": "spare", 00:28:52.898 "uuid": "e154248e-4a0a-5b14-ab90-5e0092635767", 00:28:52.898 "is_configured": true, 00:28:52.898 "data_offset": 0, 00:28:52.898 "data_size": 65536 00:28:52.898 }, 00:28:52.898 { 00:28:52.898 "name": "BaseBdev2", 00:28:52.898 "uuid": "3510ff80-0733-4a1a-a67a-98c4b360847d", 00:28:52.898 "is_configured": true, 00:28:52.898 "data_offset": 0, 00:28:52.898 "data_size": 65536 00:28:52.898 }, 00:28:52.898 { 00:28:52.898 "name": "BaseBdev3", 00:28:52.898 "uuid": "c59210ce-5534-4ea2-855c-c4e00e8b412c", 00:28:52.898 "is_configured": true, 00:28:52.898 "data_offset": 0, 00:28:52.898 "data_size": 65536 00:28:52.898 }, 00:28:52.898 { 00:28:52.898 "name": "BaseBdev4", 00:28:52.898 "uuid": "2e1cc900-444b-4c35-a0fc-ec994917d571", 00:28:52.898 "is_configured": true, 00:28:52.898 "data_offset": 0, 00:28:52.898 "data_size": 65536 00:28:52.898 } 00:28:52.898 ] 00:28:52.898 }' 00:28:52.898 13:12:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:52.898 13:12:56 -- common/autotest_common.sh@10 -- # set +x 00:28:53.837 13:12:57 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:53.837 [2024-04-17 13:12:57.927787] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:53.837 [2024-04-17 13:12:57.927836] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:53.837 [2024-04-17 13:12:57.927941] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:53.837 [2024-04-17 13:12:57.928029] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:53.837 [2024-04-17 13:12:57.928041] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:28:53.838 13:12:57 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:53.838 13:12:57 -- bdev/bdev_raid.sh@671 -- # jq length 00:28:54.096 13:12:58 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:28:54.096 13:12:58 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:28:54.096 13:12:58 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:28:54.096 13:12:58 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:54.096 13:12:58 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:28:54.096 13:12:58 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:54.096 13:12:58 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:28:54.096 13:12:58 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:54.096 13:12:58 -- bdev/nbd_common.sh@12 -- # local i 00:28:54.096 13:12:58 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:54.096 13:12:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:54.096 13:12:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:28:54.355 /dev/nbd0 00:28:54.355 13:12:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:54.355 13:12:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:54.355 13:12:58 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:28:54.355 13:12:58 -- common/autotest_common.sh@855 -- # local i 00:28:54.355 13:12:58 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:28:54.355 13:12:58 -- 
common/autotest_common.sh@857 -- # (( i <= 20 )) 00:28:54.355 13:12:58 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:28:54.355 13:12:58 -- common/autotest_common.sh@859 -- # break 00:28:54.355 13:12:58 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:28:54.355 13:12:58 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:28:54.355 13:12:58 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:54.355 1+0 records in 00:28:54.355 1+0 records out 00:28:54.355 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220775 s, 18.6 MB/s 00:28:54.355 13:12:58 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:54.355 13:12:58 -- common/autotest_common.sh@872 -- # size=4096 00:28:54.355 13:12:58 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:54.355 13:12:58 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:28:54.355 13:12:58 -- common/autotest_common.sh@875 -- # return 0 00:28:54.355 13:12:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:54.355 13:12:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:54.355 13:12:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:28:54.614 /dev/nbd1 00:28:54.614 13:12:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:54.614 13:12:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:54.614 13:12:58 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:28:54.614 13:12:58 -- common/autotest_common.sh@855 -- # local i 00:28:54.614 13:12:58 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:28:54.614 13:12:58 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:28:54.614 13:12:58 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:28:54.614 13:12:58 -- common/autotest_common.sh@859 -- # break 00:28:54.614 13:12:58 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:28:54.614 13:12:58 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:28:54.614 13:12:58 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:54.614 1+0 records in 00:28:54.614 1+0 records out 00:28:54.614 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283446 s, 14.5 MB/s 00:28:54.614 13:12:58 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:54.614 13:12:58 -- common/autotest_common.sh@872 -- # size=4096 00:28:54.614 13:12:58 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:54.873 13:12:58 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:28:54.873 13:12:58 -- common/autotest_common.sh@875 -- # return 0 00:28:54.873 13:12:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:54.873 13:12:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:54.873 13:12:58 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:28:54.873 13:12:58 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:28:54.873 13:12:58 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:54.873 13:12:58 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:28:54.873 13:12:58 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:54.873 13:12:58 -- bdev/nbd_common.sh@51 -- # local i 00:28:54.873 13:12:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
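The NBD sequence traced here is the data-integrity check that closes the test: BaseBdev1 (the member that was removed) and spare (the device the rebuild wrote to) are both exported as kernel block devices, probed for readiness with a single direct 4 KiB read, and byte-compared; identical contents prove the raid5f rebuild reconstructed the missing data correctly. A condensed sketch, assuming direct rpc.py calls in place of the nbd_common.sh wrappers seen in the trace (nbd_start_disk and nbd_stop_disk are the RPCs the wrappers invoke above; the /tmp path is illustrative):

  # Sketch of the NBD comparison performed above (assumed packaging).
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc nbd_start_disk BaseBdev1 /dev/nbd0      # the removed base bdev
  $rpc nbd_start_disk spare /dev/nbd1          # the device rebuilt onto
  # waitfornbd readiness probe: one direct 4 KiB read must succeed
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  cmp -i 0 /dev/nbd0 /dev/nbd1                 # equal bytes => rebuild is correct
  $rpc nbd_stop_disk /dev/nbd0
  $rpc nbd_stop_disk /dev/nbd1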
00:28:54.873 13:12:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:55.132 13:12:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:55.132 13:12:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:55.132 13:12:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:55.132 13:12:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:55.132 13:12:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:55.132 13:12:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:55.132 13:12:59 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:28:55.390 13:12:59 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:28:55.390 13:12:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:55.390 13:12:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:55.390 13:12:59 -- bdev/nbd_common.sh@41 -- # break 00:28:55.390 13:12:59 -- bdev/nbd_common.sh@45 -- # return 0 00:28:55.390 13:12:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:55.390 13:12:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:28:55.647 13:12:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:55.647 13:12:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:55.647 13:12:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:55.647 13:12:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:55.647 13:12:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:55.647 13:12:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:55.647 13:12:59 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:28:55.647 13:12:59 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:28:55.647 13:12:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:55.647 13:12:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:55.647 13:12:59 -- bdev/nbd_common.sh@41 -- # break 00:28:55.647 13:12:59 -- bdev/nbd_common.sh@45 -- # return 0 00:28:55.647 13:12:59 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:28:55.647 13:12:59 -- bdev/bdev_raid.sh@709 -- # killprocess 140192 00:28:55.647 13:12:59 -- common/autotest_common.sh@924 -- # '[' -z 140192 ']' 00:28:55.647 13:12:59 -- common/autotest_common.sh@928 -- # kill -0 140192 00:28:55.647 13:12:59 -- common/autotest_common.sh@929 -- # uname 00:28:55.647 13:12:59 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:28:55.647 13:12:59 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 140192 00:28:55.647 13:12:59 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:28:55.647 13:12:59 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:28:55.647 13:12:59 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 140192' 00:28:55.647 killing process with pid 140192 00:28:55.647 13:12:59 -- common/autotest_common.sh@943 -- # kill 140192 00:28:55.647 Received shutdown signal, test time was about 60.000000 seconds 00:28:55.647 00:28:55.647 Latency(us) 00:28:55.647 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:55.647 =================================================================================================================== 00:28:55.647 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:55.647 [2024-04-17 13:12:59.714980] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:55.647 13:12:59 -- common/autotest_common.sh@948 -- # wait 140192 00:28:56.213 [2024-04-17 
13:13:00.126879] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:57.168 ************************************ 00:28:57.168 END TEST raid5f_rebuild_test 00:28:57.168 ************************************ 00:28:57.168 13:13:01 -- bdev/bdev_raid.sh@711 -- # return 0 00:28:57.168 00:28:57.168 real 0m26.481s 00:28:57.168 user 0m39.127s 00:28:57.168 sys 0m2.781s 00:28:57.168 13:13:01 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:28:57.168 13:13:01 -- common/autotest_common.sh@10 -- # set +x 00:28:57.453 13:13:01 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false 00:28:57.453 13:13:01 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:28:57.454 13:13:01 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:28:57.454 13:13:01 -- common/autotest_common.sh@10 -- # set +x 00:28:57.454 ************************************ 00:28:57.454 START TEST raid5f_rebuild_test_sb 00:28:57.454 ************************************ 00:28:57.454 13:13:01 -- common/autotest_common.sh@1099 -- # raid_rebuild_test raid5f 4 true false 00:28:57.454 13:13:01 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:28:57.454 13:13:01 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:28:57.454 13:13:01 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:28:57.454 13:13:01 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:28:57.454 13:13:01 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:28:57.454 13:13:01 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:28:57.454 13:13:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:28:57.454 13:13:01 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:28:57.454 13:13:01 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:28:57.454 13:13:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:28:57.454 13:13:01 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:28:57.454 13:13:01 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:28:57.454 13:13:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:28:57.454 13:13:01 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:28:57.454 13:13:01 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:28:57.454 13:13:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:28:57.454 13:13:01 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:28:57.454 13:13:01 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:28:57.454 13:13:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:28:57.454 13:13:01 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:28:57.454 13:13:01 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:28:57.454 13:13:01 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:28:57.454 13:13:01 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:28:57.454 13:13:01 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:28:57.454 13:13:01 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:28:57.454 13:13:01 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:28:57.454 13:13:01 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:28:57.454 13:13:01 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:28:57.454 13:13:01 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:28:57.454 13:13:01 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:28:57.454 13:13:01 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:28:57.454 13:13:01 -- bdev/bdev_raid.sh@544 -- # raid_pid=140864 00:28:57.454 13:13:01 -- bdev/bdev_raid.sh@545 -- # waitforlisten 140864 /var/tmp/spdk-raid.sock 00:28:57.454 
13:13:01 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:57.454 13:13:01 -- common/autotest_common.sh@817 -- # '[' -z 140864 ']' 00:28:57.454 13:13:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:57.454 13:13:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:57.454 13:13:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:57.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:57.454 13:13:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:57.454 13:13:01 -- common/autotest_common.sh@10 -- # set +x 00:28:57.454 [2024-04-17 13:13:01.415087] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:28:57.454 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:57.454 Zero copy mechanism will not be used. 00:28:57.454 [2024-04-17 13:13:01.415244] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140864 ] 00:28:57.454 [2024-04-17 13:13:01.574771] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:57.713 [2024-04-17 13:13:01.789009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:57.971 [2024-04-17 13:13:01.991127] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:58.229 13:13:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:58.229 13:13:02 -- common/autotest_common.sh@850 -- # return 0 00:28:58.229 13:13:02 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:28:58.229 13:13:02 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:28:58.229 13:13:02 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:58.796 BaseBdev1_malloc 00:28:58.796 13:13:02 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:58.796 [2024-04-17 13:13:02.886596] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:58.796 [2024-04-17 13:13:02.886703] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:58.796 [2024-04-17 13:13:02.886739] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:28:58.796 [2024-04-17 13:13:02.886791] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:58.796 [2024-04-17 13:13:02.889395] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:58.796 [2024-04-17 13:13:02.889450] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:58.796 BaseBdev1 00:28:58.796 13:13:02 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:28:58.796 13:13:02 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:28:58.796 13:13:02 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:59.055 BaseBdev2_malloc 00:28:59.055 13:13:03 -- bdev/bdev_raid.sh@551 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:59.313 [2024-04-17 13:13:03.398230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:59.313 [2024-04-17 13:13:03.398340] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:59.313 [2024-04-17 13:13:03.398388] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:28:59.313 [2024-04-17 13:13:03.398452] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:59.313 [2024-04-17 13:13:03.401022] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:59.313 [2024-04-17 13:13:03.401076] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:59.313 BaseBdev2 00:28:59.313 13:13:03 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:28:59.313 13:13:03 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:28:59.313 13:13:03 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:28:59.571 BaseBdev3_malloc 00:28:59.571 13:13:03 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:28:59.829 [2024-04-17 13:13:03.890003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:28:59.829 [2024-04-17 13:13:03.890122] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:59.829 [2024-04-17 13:13:03.890167] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:28:59.829 [2024-04-17 13:13:03.890215] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:59.829 [2024-04-17 13:13:03.892708] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:59.829 [2024-04-17 13:13:03.892770] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:28:59.829 BaseBdev3 00:28:59.829 13:13:03 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:28:59.829 13:13:03 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:28:59.829 13:13:03 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:29:00.087 BaseBdev4_malloc 00:29:00.087 13:13:04 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:29:00.344 [2024-04-17 13:13:04.382072] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:29:00.344 [2024-04-17 13:13:04.382199] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:00.344 [2024-04-17 13:13:04.382238] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:29:00.344 [2024-04-17 13:13:04.382285] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:00.344 [2024-04-17 13:13:04.384798] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:00.344 [2024-04-17 13:13:04.384857] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:29:00.344 BaseBdev4 00:29:00.344 13:13:04 -- bdev/bdev_raid.sh@558 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:29:00.602 spare_malloc 00:29:00.602 13:13:04 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:00.860 spare_delay 00:29:00.860 13:13:04 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:01.118 [2024-04-17 13:13:05.114056] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:01.118 [2024-04-17 13:13:05.114154] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:01.118 [2024-04-17 13:13:05.114192] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:29:01.118 [2024-04-17 13:13:05.114240] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:01.118 [2024-04-17 13:13:05.116771] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:01.118 [2024-04-17 13:13:05.116842] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:01.118 spare 00:29:01.118 13:13:05 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:29:01.376 [2024-04-17 13:13:05.350211] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:01.376 [2024-04-17 13:13:05.352449] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:01.376 [2024-04-17 13:13:05.352542] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:01.377 [2024-04-17 13:13:05.352605] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:01.377 [2024-04-17 13:13:05.352861] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:29:01.377 [2024-04-17 13:13:05.352886] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:29:01.377 [2024-04-17 13:13:05.353013] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:29:01.377 [2024-04-17 13:13:05.359917] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:29:01.377 [2024-04-17 13:13:05.359980] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:29:01.377 [2024-04-17 13:13:05.360234] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:01.377 13:13:05 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:29:01.377 13:13:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:29:01.377 13:13:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:29:01.377 13:13:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:29:01.377 13:13:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:29:01.377 13:13:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:29:01.377 13:13:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:01.377 13:13:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:01.377 13:13:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:01.377 13:13:05 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:29:01.377 13:13:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:01.377 13:13:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:01.635 13:13:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:01.635 "name": "raid_bdev1", 00:29:01.635 "uuid": "a26587d3-ebb9-4b8f-b8e6-beb9e33778f9", 00:29:01.635 "strip_size_kb": 64, 00:29:01.635 "state": "online", 00:29:01.635 "raid_level": "raid5f", 00:29:01.635 "superblock": true, 00:29:01.635 "num_base_bdevs": 4, 00:29:01.635 "num_base_bdevs_discovered": 4, 00:29:01.635 "num_base_bdevs_operational": 4, 00:29:01.635 "base_bdevs_list": [ 00:29:01.635 { 00:29:01.635 "name": "BaseBdev1", 00:29:01.635 "uuid": "17a20b0e-23f1-55de-ad41-6779d7c5166d", 00:29:01.635 "is_configured": true, 00:29:01.635 "data_offset": 2048, 00:29:01.635 "data_size": 63488 00:29:01.635 }, 00:29:01.635 { 00:29:01.635 "name": "BaseBdev2", 00:29:01.635 "uuid": "90bdd4af-21ce-5509-a8fc-a2cb31da1172", 00:29:01.635 "is_configured": true, 00:29:01.635 "data_offset": 2048, 00:29:01.635 "data_size": 63488 00:29:01.635 }, 00:29:01.635 { 00:29:01.635 "name": "BaseBdev3", 00:29:01.635 "uuid": "43fc5596-36c5-5695-be18-461c360f1e2f", 00:29:01.635 "is_configured": true, 00:29:01.635 "data_offset": 2048, 00:29:01.635 "data_size": 63488 00:29:01.635 }, 00:29:01.635 { 00:29:01.635 "name": "BaseBdev4", 00:29:01.635 "uuid": "56792456-b2ee-58a3-ad13-a946ccf1a425", 00:29:01.635 "is_configured": true, 00:29:01.635 "data_offset": 2048, 00:29:01.635 "data_size": 63488 00:29:01.635 } 00:29:01.635 ] 00:29:01.635 }' 00:29:01.635 13:13:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:01.635 13:13:05 -- common/autotest_common.sh@10 -- # set +x 00:29:02.202 13:13:06 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:02.202 13:13:06 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:29:02.769 [2024-04-17 13:13:06.624032] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:02.769 13:13:06 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=190464 00:29:02.769 13:13:06 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:02.769 13:13:06 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:02.769 13:13:06 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:29:02.769 13:13:06 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:29:02.769 13:13:06 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:29:02.769 13:13:06 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:29:02.769 13:13:06 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:02.769 13:13:06 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:29:02.769 13:13:06 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:02.769 13:13:06 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:29:02.769 13:13:06 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:02.769 13:13:06 -- bdev/nbd_common.sh@12 -- # local i 00:29:02.769 13:13:06 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:02.769 13:13:06 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:02.769 13:13:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:29:03.027 [2024-04-17 
13:13:07.172145] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:29:03.286 /dev/nbd0 00:29:03.286 13:13:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:03.286 13:13:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:03.286 13:13:07 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:29:03.286 13:13:07 -- common/autotest_common.sh@855 -- # local i 00:29:03.286 13:13:07 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:29:03.286 13:13:07 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:29:03.286 13:13:07 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:29:03.286 13:13:07 -- common/autotest_common.sh@859 -- # break 00:29:03.286 13:13:07 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:29:03.286 13:13:07 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:29:03.286 13:13:07 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:03.286 1+0 records in 00:29:03.286 1+0 records out 00:29:03.286 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000295695 s, 13.9 MB/s 00:29:03.286 13:13:07 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:03.286 13:13:07 -- common/autotest_common.sh@872 -- # size=4096 00:29:03.286 13:13:07 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:03.286 13:13:07 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:29:03.286 13:13:07 -- common/autotest_common.sh@875 -- # return 0 00:29:03.286 13:13:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:03.286 13:13:07 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:03.286 13:13:07 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:29:03.286 13:13:07 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:29:03.286 13:13:07 -- bdev/bdev_raid.sh@582 -- # echo 192 00:29:03.286 13:13:07 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:29:03.854 496+0 records in 00:29:03.854 496+0 records out 00:29:03.854 97517568 bytes (98 MB, 93 MiB) copied, 0.586988 s, 166 MB/s 00:29:03.854 13:13:07 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:29:03.854 13:13:07 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:03.854 13:13:07 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:29:03.854 13:13:07 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:03.854 13:13:07 -- bdev/nbd_common.sh@51 -- # local i 00:29:03.854 13:13:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:03.854 13:13:07 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:04.112 13:13:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:04.112 13:13:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:04.112 13:13:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:04.112 13:13:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:04.112 13:13:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:04.112 13:13:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:04.112 13:13:08 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:29:04.112 [2024-04-17 13:13:08.070690] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:04.112 13:13:08 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:29:04.112 13:13:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:04.112 13:13:08 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:04.112 13:13:08 -- bdev/nbd_common.sh@41 -- # break 00:29:04.112 13:13:08 -- bdev/nbd_common.sh@45 -- # return 0 00:29:04.112 13:13:08 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:29:04.371 [2024-04-17 13:13:08.386300] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:04.371 13:13:08 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:29:04.371 13:13:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:29:04.371 13:13:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:29:04.371 13:13:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:29:04.371 13:13:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:29:04.371 13:13:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:29:04.371 13:13:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:04.371 13:13:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:04.371 13:13:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:04.371 13:13:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:04.371 13:13:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:04.371 13:13:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:04.630 13:13:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:04.630 "name": "raid_bdev1", 00:29:04.630 "uuid": "a26587d3-ebb9-4b8f-b8e6-beb9e33778f9", 00:29:04.630 "strip_size_kb": 64, 00:29:04.630 "state": "online", 00:29:04.630 "raid_level": "raid5f", 00:29:04.630 "superblock": true, 00:29:04.630 "num_base_bdevs": 4, 00:29:04.630 "num_base_bdevs_discovered": 3, 00:29:04.630 "num_base_bdevs_operational": 3, 00:29:04.630 "base_bdevs_list": [ 00:29:04.630 { 00:29:04.630 "name": null, 00:29:04.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:04.630 "is_configured": false, 00:29:04.630 "data_offset": 2048, 00:29:04.630 "data_size": 63488 00:29:04.630 }, 00:29:04.630 { 00:29:04.630 "name": "BaseBdev2", 00:29:04.630 "uuid": "90bdd4af-21ce-5509-a8fc-a2cb31da1172", 00:29:04.630 "is_configured": true, 00:29:04.630 "data_offset": 2048, 00:29:04.630 "data_size": 63488 00:29:04.630 }, 00:29:04.630 { 00:29:04.630 "name": "BaseBdev3", 00:29:04.630 "uuid": "43fc5596-36c5-5695-be18-461c360f1e2f", 00:29:04.630 "is_configured": true, 00:29:04.630 "data_offset": 2048, 00:29:04.630 "data_size": 63488 00:29:04.630 }, 00:29:04.630 { 00:29:04.630 "name": "BaseBdev4", 00:29:04.630 "uuid": "56792456-b2ee-58a3-ad13-a946ccf1a425", 00:29:04.630 "is_configured": true, 00:29:04.630 "data_offset": 2048, 00:29:04.630 "data_size": 63488 00:29:04.630 } 00:29:04.630 ] 00:29:04.630 }' 00:29:04.630 13:13:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:04.630 13:13:08 -- common/autotest_common.sh@10 -- # set +x 00:29:05.567 13:13:09 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:05.567 [2024-04-17 13:13:09.566600] bdev_raid.c:3247:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:29:05.567 [2024-04-17 13:13:09.566655] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:05.567 [2024-04-17 13:13:09.580535] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002bd00 
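Note: the trace here is the degrade/rebuild leg of raid5f_rebuild_test_sb. One base bdev (BaseBdev1) is hot-removed, the array is checked to stay online with 3 of 4 members, and the delay-backed "spare" bdev is attached in its place, which starts the rebuild logged next (the test also removes the spare once mid-rebuild to exercise the abort path before re-adding it). A minimal sketch of that RPC sequence, assuming a running SPDK target serving /var/tmp/spdk-raid.sock and the bdev names used in this run:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # degrade: a raid5f array survives the loss of one base bdev
  $rpc bdev_raid_remove_base_bdev BaseBdev1

  # confirm the array is still online with 3 of 4 members discovered
  $rpc bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "raid_bdev1") | .num_base_bdevs_discovered'   # expect 3

  # attach the replacement; the target starts a background rebuild onto "spare"
  $rpc bdev_raid_add_base_bdev raid_bdev1 spare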
00:29:05.567 [2024-04-17 13:13:09.589552] bdev_raid.c:2751:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:05.567 13:13:09 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:29:06.503 13:13:10 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:06.503 13:13:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:29:06.503 13:13:10 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:29:06.503 13:13:10 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:29:06.503 13:13:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:29:06.503 13:13:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:06.503 13:13:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:06.763 13:13:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:29:06.763 "name": "raid_bdev1", 00:29:06.763 "uuid": "a26587d3-ebb9-4b8f-b8e6-beb9e33778f9", 00:29:06.763 "strip_size_kb": 64, 00:29:06.763 "state": "online", 00:29:06.763 "raid_level": "raid5f", 00:29:06.763 "superblock": true, 00:29:06.763 "num_base_bdevs": 4, 00:29:06.763 "num_base_bdevs_discovered": 4, 00:29:06.763 "num_base_bdevs_operational": 4, 00:29:06.763 "process": { 00:29:06.763 "type": "rebuild", 00:29:06.763 "target": "spare", 00:29:06.763 "progress": { 00:29:06.763 "blocks": 23040, 00:29:06.763 "percent": 12 00:29:06.763 } 00:29:06.763 }, 00:29:06.763 "base_bdevs_list": [ 00:29:06.763 { 00:29:06.763 "name": "spare", 00:29:06.763 "uuid": "cd285a0e-fa15-56b2-a1f6-ad6ece282174", 00:29:06.763 "is_configured": true, 00:29:06.763 "data_offset": 2048, 00:29:06.763 "data_size": 63488 00:29:06.763 }, 00:29:06.763 { 00:29:06.763 "name": "BaseBdev2", 00:29:06.763 "uuid": "90bdd4af-21ce-5509-a8fc-a2cb31da1172", 00:29:06.763 "is_configured": true, 00:29:06.763 "data_offset": 2048, 00:29:06.763 "data_size": 63488 00:29:06.763 }, 00:29:06.763 { 00:29:06.763 "name": "BaseBdev3", 00:29:06.763 "uuid": "43fc5596-36c5-5695-be18-461c360f1e2f", 00:29:06.763 "is_configured": true, 00:29:06.763 "data_offset": 2048, 00:29:06.763 "data_size": 63488 00:29:06.763 }, 00:29:06.763 { 00:29:06.763 "name": "BaseBdev4", 00:29:06.763 "uuid": "56792456-b2ee-58a3-ad13-a946ccf1a425", 00:29:06.763 "is_configured": true, 00:29:06.763 "data_offset": 2048, 00:29:06.763 "data_size": 63488 00:29:06.763 } 00:29:06.763 ] 00:29:06.763 }' 00:29:06.763 13:13:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:29:07.022 13:13:10 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:07.022 13:13:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:29:07.022 13:13:10 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:29:07.022 13:13:10 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:07.281 [2024-04-17 13:13:11.259811] bdev_raid.c:2123:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:07.281 [2024-04-17 13:13:11.304968] bdev_raid.c:2442:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:07.281 [2024-04-17 13:13:11.305078] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:07.281 13:13:11 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:29:07.281 13:13:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:29:07.281 13:13:11 -- bdev/bdev_raid.sh@118 -- 
# local expected_state=online 00:29:07.281 13:13:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:29:07.281 13:13:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:29:07.281 13:13:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:29:07.281 13:13:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:07.281 13:13:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:07.281 13:13:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:07.281 13:13:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:07.281 13:13:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:07.281 13:13:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:07.541 13:13:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:07.541 "name": "raid_bdev1", 00:29:07.541 "uuid": "a26587d3-ebb9-4b8f-b8e6-beb9e33778f9", 00:29:07.541 "strip_size_kb": 64, 00:29:07.541 "state": "online", 00:29:07.541 "raid_level": "raid5f", 00:29:07.541 "superblock": true, 00:29:07.541 "num_base_bdevs": 4, 00:29:07.541 "num_base_bdevs_discovered": 3, 00:29:07.541 "num_base_bdevs_operational": 3, 00:29:07.541 "base_bdevs_list": [ 00:29:07.541 { 00:29:07.541 "name": null, 00:29:07.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:07.541 "is_configured": false, 00:29:07.541 "data_offset": 2048, 00:29:07.541 "data_size": 63488 00:29:07.541 }, 00:29:07.541 { 00:29:07.541 "name": "BaseBdev2", 00:29:07.541 "uuid": "90bdd4af-21ce-5509-a8fc-a2cb31da1172", 00:29:07.541 "is_configured": true, 00:29:07.541 "data_offset": 2048, 00:29:07.541 "data_size": 63488 00:29:07.541 }, 00:29:07.541 { 00:29:07.541 "name": "BaseBdev3", 00:29:07.541 "uuid": "43fc5596-36c5-5695-be18-461c360f1e2f", 00:29:07.541 "is_configured": true, 00:29:07.541 "data_offset": 2048, 00:29:07.541 "data_size": 63488 00:29:07.541 }, 00:29:07.541 { 00:29:07.541 "name": "BaseBdev4", 00:29:07.541 "uuid": "56792456-b2ee-58a3-ad13-a946ccf1a425", 00:29:07.541 "is_configured": true, 00:29:07.541 "data_offset": 2048, 00:29:07.541 "data_size": 63488 00:29:07.541 } 00:29:07.541 ] 00:29:07.541 }' 00:29:07.541 13:13:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:07.541 13:13:11 -- common/autotest_common.sh@10 -- # set +x 00:29:08.478 13:13:12 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:08.478 13:13:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:29:08.478 13:13:12 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:29:08.478 13:13:12 -- bdev/bdev_raid.sh@185 -- # local target=none 00:29:08.478 13:13:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:29:08.478 13:13:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:08.478 13:13:12 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:08.478 13:13:12 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:29:08.478 "name": "raid_bdev1", 00:29:08.478 "uuid": "a26587d3-ebb9-4b8f-b8e6-beb9e33778f9", 00:29:08.478 "strip_size_kb": 64, 00:29:08.478 "state": "online", 00:29:08.478 "raid_level": "raid5f", 00:29:08.478 "superblock": true, 00:29:08.478 "num_base_bdevs": 4, 00:29:08.478 "num_base_bdevs_discovered": 3, 00:29:08.478 "num_base_bdevs_operational": 3, 00:29:08.478 "base_bdevs_list": [ 00:29:08.478 { 00:29:08.478 "name": null, 00:29:08.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:08.479 
"is_configured": false, 00:29:08.479 "data_offset": 2048, 00:29:08.479 "data_size": 63488 00:29:08.479 }, 00:29:08.479 { 00:29:08.479 "name": "BaseBdev2", 00:29:08.479 "uuid": "90bdd4af-21ce-5509-a8fc-a2cb31da1172", 00:29:08.479 "is_configured": true, 00:29:08.479 "data_offset": 2048, 00:29:08.479 "data_size": 63488 00:29:08.479 }, 00:29:08.479 { 00:29:08.479 "name": "BaseBdev3", 00:29:08.479 "uuid": "43fc5596-36c5-5695-be18-461c360f1e2f", 00:29:08.479 "is_configured": true, 00:29:08.479 "data_offset": 2048, 00:29:08.479 "data_size": 63488 00:29:08.479 }, 00:29:08.479 { 00:29:08.479 "name": "BaseBdev4", 00:29:08.479 "uuid": "56792456-b2ee-58a3-ad13-a946ccf1a425", 00:29:08.479 "is_configured": true, 00:29:08.479 "data_offset": 2048, 00:29:08.479 "data_size": 63488 00:29:08.479 } 00:29:08.479 ] 00:29:08.479 }' 00:29:08.479 13:13:12 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:29:08.739 13:13:12 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:08.739 13:13:12 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:29:08.739 13:13:12 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:29:08.739 13:13:12 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:08.998 [2024-04-17 13:13:13.007137] bdev_raid.c:3247:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:29:08.998 [2024-04-17 13:13:13.007375] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:08.998 [2024-04-17 13:13:13.020024] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002bea0 00:29:08.998 [2024-04-17 13:13:13.028760] bdev_raid.c:2751:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:08.998 13:13:13 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:29:09.935 13:13:14 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:09.935 13:13:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:29:09.935 13:13:14 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:29:09.935 13:13:14 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:29:09.935 13:13:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:29:09.935 13:13:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:09.935 13:13:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:10.195 13:13:14 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:29:10.195 "name": "raid_bdev1", 00:29:10.195 "uuid": "a26587d3-ebb9-4b8f-b8e6-beb9e33778f9", 00:29:10.195 "strip_size_kb": 64, 00:29:10.195 "state": "online", 00:29:10.195 "raid_level": "raid5f", 00:29:10.195 "superblock": true, 00:29:10.195 "num_base_bdevs": 4, 00:29:10.195 "num_base_bdevs_discovered": 4, 00:29:10.195 "num_base_bdevs_operational": 4, 00:29:10.195 "process": { 00:29:10.195 "type": "rebuild", 00:29:10.195 "target": "spare", 00:29:10.195 "progress": { 00:29:10.195 "blocks": 23040, 00:29:10.195 "percent": 12 00:29:10.195 } 00:29:10.195 }, 00:29:10.195 "base_bdevs_list": [ 00:29:10.195 { 00:29:10.195 "name": "spare", 00:29:10.195 "uuid": "cd285a0e-fa15-56b2-a1f6-ad6ece282174", 00:29:10.195 "is_configured": true, 00:29:10.195 "data_offset": 2048, 00:29:10.195 "data_size": 63488 00:29:10.195 }, 00:29:10.195 { 00:29:10.195 "name": "BaseBdev2", 00:29:10.195 "uuid": "90bdd4af-21ce-5509-a8fc-a2cb31da1172", 00:29:10.195 "is_configured": 
true, 00:29:10.195 "data_offset": 2048, 00:29:10.195 "data_size": 63488 00:29:10.195 }, 00:29:10.195 { 00:29:10.195 "name": "BaseBdev3", 00:29:10.195 "uuid": "43fc5596-36c5-5695-be18-461c360f1e2f", 00:29:10.195 "is_configured": true, 00:29:10.195 "data_offset": 2048, 00:29:10.195 "data_size": 63488 00:29:10.195 }, 00:29:10.195 { 00:29:10.195 "name": "BaseBdev4", 00:29:10.195 "uuid": "56792456-b2ee-58a3-ad13-a946ccf1a425", 00:29:10.195 "is_configured": true, 00:29:10.195 "data_offset": 2048, 00:29:10.195 "data_size": 63488 00:29:10.195 } 00:29:10.195 ] 00:29:10.195 }' 00:29:10.195 13:13:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:29:10.454 13:13:14 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:10.454 13:13:14 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:29:10.454 13:13:14 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:29:10.454 13:13:14 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:29:10.454 13:13:14 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:29:10.454 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:29:10.454 13:13:14 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:29:10.454 13:13:14 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:29:10.454 13:13:14 -- bdev/bdev_raid.sh@657 -- # local timeout=815 00:29:10.454 13:13:14 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:29:10.454 13:13:14 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:10.454 13:13:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:29:10.454 13:13:14 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:29:10.454 13:13:14 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:29:10.454 13:13:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:29:10.454 13:13:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:10.454 13:13:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:10.714 13:13:14 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:29:10.714 "name": "raid_bdev1", 00:29:10.714 "uuid": "a26587d3-ebb9-4b8f-b8e6-beb9e33778f9", 00:29:10.714 "strip_size_kb": 64, 00:29:10.714 "state": "online", 00:29:10.714 "raid_level": "raid5f", 00:29:10.714 "superblock": true, 00:29:10.714 "num_base_bdevs": 4, 00:29:10.714 "num_base_bdevs_discovered": 4, 00:29:10.714 "num_base_bdevs_operational": 4, 00:29:10.714 "process": { 00:29:10.714 "type": "rebuild", 00:29:10.714 "target": "spare", 00:29:10.714 "progress": { 00:29:10.714 "blocks": 30720, 00:29:10.714 "percent": 16 00:29:10.714 } 00:29:10.714 }, 00:29:10.714 "base_bdevs_list": [ 00:29:10.714 { 00:29:10.714 "name": "spare", 00:29:10.714 "uuid": "cd285a0e-fa15-56b2-a1f6-ad6ece282174", 00:29:10.714 "is_configured": true, 00:29:10.714 "data_offset": 2048, 00:29:10.714 "data_size": 63488 00:29:10.714 }, 00:29:10.714 { 00:29:10.714 "name": "BaseBdev2", 00:29:10.714 "uuid": "90bdd4af-21ce-5509-a8fc-a2cb31da1172", 00:29:10.714 "is_configured": true, 00:29:10.714 "data_offset": 2048, 00:29:10.714 "data_size": 63488 00:29:10.714 }, 00:29:10.714 { 00:29:10.714 "name": "BaseBdev3", 00:29:10.714 "uuid": "43fc5596-36c5-5695-be18-461c360f1e2f", 00:29:10.714 "is_configured": true, 00:29:10.714 "data_offset": 2048, 00:29:10.714 "data_size": 63488 00:29:10.714 }, 00:29:10.714 { 00:29:10.714 "name": "BaseBdev4", 00:29:10.714 "uuid": 
"56792456-b2ee-58a3-ad13-a946ccf1a425", 00:29:10.714 "is_configured": true, 00:29:10.714 "data_offset": 2048, 00:29:10.714 "data_size": 63488 00:29:10.714 } 00:29:10.714 ] 00:29:10.714 }' 00:29:10.714 13:13:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:29:10.714 13:13:14 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:10.714 13:13:14 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:29:10.972 13:13:14 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:29:10.972 13:13:14 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:11.922 13:13:15 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:29:11.922 13:13:15 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:11.922 13:13:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:29:11.922 13:13:15 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:29:11.922 13:13:15 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:29:11.922 13:13:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:29:11.922 13:13:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:11.922 13:13:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:12.181 13:13:16 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:29:12.181 "name": "raid_bdev1", 00:29:12.181 "uuid": "a26587d3-ebb9-4b8f-b8e6-beb9e33778f9", 00:29:12.181 "strip_size_kb": 64, 00:29:12.181 "state": "online", 00:29:12.181 "raid_level": "raid5f", 00:29:12.181 "superblock": true, 00:29:12.181 "num_base_bdevs": 4, 00:29:12.181 "num_base_bdevs_discovered": 4, 00:29:12.181 "num_base_bdevs_operational": 4, 00:29:12.181 "process": { 00:29:12.181 "type": "rebuild", 00:29:12.181 "target": "spare", 00:29:12.181 "progress": { 00:29:12.181 "blocks": 57600, 00:29:12.181 "percent": 30 00:29:12.181 } 00:29:12.181 }, 00:29:12.181 "base_bdevs_list": [ 00:29:12.181 { 00:29:12.181 "name": "spare", 00:29:12.181 "uuid": "cd285a0e-fa15-56b2-a1f6-ad6ece282174", 00:29:12.181 "is_configured": true, 00:29:12.181 "data_offset": 2048, 00:29:12.181 "data_size": 63488 00:29:12.181 }, 00:29:12.181 { 00:29:12.181 "name": "BaseBdev2", 00:29:12.181 "uuid": "90bdd4af-21ce-5509-a8fc-a2cb31da1172", 00:29:12.181 "is_configured": true, 00:29:12.181 "data_offset": 2048, 00:29:12.181 "data_size": 63488 00:29:12.181 }, 00:29:12.181 { 00:29:12.181 "name": "BaseBdev3", 00:29:12.181 "uuid": "43fc5596-36c5-5695-be18-461c360f1e2f", 00:29:12.181 "is_configured": true, 00:29:12.181 "data_offset": 2048, 00:29:12.181 "data_size": 63488 00:29:12.181 }, 00:29:12.181 { 00:29:12.181 "name": "BaseBdev4", 00:29:12.181 "uuid": "56792456-b2ee-58a3-ad13-a946ccf1a425", 00:29:12.181 "is_configured": true, 00:29:12.181 "data_offset": 2048, 00:29:12.181 "data_size": 63488 00:29:12.181 } 00:29:12.181 ] 00:29:12.181 }' 00:29:12.181 13:13:16 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:29:12.181 13:13:16 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:12.181 13:13:16 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:29:12.181 13:13:16 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:29:12.181 13:13:16 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:13.556 13:13:17 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:29:13.556 13:13:17 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:13.556 13:13:17 -- bdev/bdev_raid.sh@183 -- # local 
raid_bdev_name=raid_bdev1 00:29:13.556 13:13:17 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:29:13.556 13:13:17 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:29:13.556 13:13:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:29:13.556 13:13:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:13.556 13:13:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:13.556 13:13:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:29:13.556 "name": "raid_bdev1", 00:29:13.556 "uuid": "a26587d3-ebb9-4b8f-b8e6-beb9e33778f9", 00:29:13.556 "strip_size_kb": 64, 00:29:13.556 "state": "online", 00:29:13.556 "raid_level": "raid5f", 00:29:13.556 "superblock": true, 00:29:13.556 "num_base_bdevs": 4, 00:29:13.556 "num_base_bdevs_discovered": 4, 00:29:13.556 "num_base_bdevs_operational": 4, 00:29:13.556 "process": { 00:29:13.556 "type": "rebuild", 00:29:13.556 "target": "spare", 00:29:13.556 "progress": { 00:29:13.556 "blocks": 84480, 00:29:13.556 "percent": 44 00:29:13.556 } 00:29:13.556 }, 00:29:13.556 "base_bdevs_list": [ 00:29:13.556 { 00:29:13.556 "name": "spare", 00:29:13.556 "uuid": "cd285a0e-fa15-56b2-a1f6-ad6ece282174", 00:29:13.557 "is_configured": true, 00:29:13.557 "data_offset": 2048, 00:29:13.557 "data_size": 63488 00:29:13.557 }, 00:29:13.557 { 00:29:13.557 "name": "BaseBdev2", 00:29:13.557 "uuid": "90bdd4af-21ce-5509-a8fc-a2cb31da1172", 00:29:13.557 "is_configured": true, 00:29:13.557 "data_offset": 2048, 00:29:13.557 "data_size": 63488 00:29:13.557 }, 00:29:13.557 { 00:29:13.557 "name": "BaseBdev3", 00:29:13.557 "uuid": "43fc5596-36c5-5695-be18-461c360f1e2f", 00:29:13.557 "is_configured": true, 00:29:13.557 "data_offset": 2048, 00:29:13.557 "data_size": 63488 00:29:13.557 }, 00:29:13.557 { 00:29:13.557 "name": "BaseBdev4", 00:29:13.557 "uuid": "56792456-b2ee-58a3-ad13-a946ccf1a425", 00:29:13.557 "is_configured": true, 00:29:13.557 "data_offset": 2048, 00:29:13.557 "data_size": 63488 00:29:13.557 } 00:29:13.557 ] 00:29:13.557 }' 00:29:13.557 13:13:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:29:13.557 13:13:17 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:13.557 13:13:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:29:13.557 13:13:17 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:29:13.557 13:13:17 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:14.933 13:13:18 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:29:14.933 13:13:18 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:14.933 13:13:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:29:14.933 13:13:18 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:29:14.933 13:13:18 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:29:14.933 13:13:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:29:14.933 13:13:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:14.933 13:13:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:14.933 13:13:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:29:14.933 "name": "raid_bdev1", 00:29:14.933 "uuid": "a26587d3-ebb9-4b8f-b8e6-beb9e33778f9", 00:29:14.933 "strip_size_kb": 64, 00:29:14.933 "state": "online", 00:29:14.933 "raid_level": "raid5f", 00:29:14.933 "superblock": true, 00:29:14.933 "num_base_bdevs": 4, 
00:29:14.933 "num_base_bdevs_discovered": 4, 00:29:14.933 "num_base_bdevs_operational": 4, 00:29:14.933 "process": { 00:29:14.933 "type": "rebuild", 00:29:14.933 "target": "spare", 00:29:14.933 "progress": { 00:29:14.933 "blocks": 111360, 00:29:14.933 "percent": 58 00:29:14.933 } 00:29:14.933 }, 00:29:14.933 "base_bdevs_list": [ 00:29:14.933 { 00:29:14.933 "name": "spare", 00:29:14.933 "uuid": "cd285a0e-fa15-56b2-a1f6-ad6ece282174", 00:29:14.933 "is_configured": true, 00:29:14.933 "data_offset": 2048, 00:29:14.933 "data_size": 63488 00:29:14.933 }, 00:29:14.933 { 00:29:14.933 "name": "BaseBdev2", 00:29:14.933 "uuid": "90bdd4af-21ce-5509-a8fc-a2cb31da1172", 00:29:14.933 "is_configured": true, 00:29:14.933 "data_offset": 2048, 00:29:14.933 "data_size": 63488 00:29:14.933 }, 00:29:14.933 { 00:29:14.933 "name": "BaseBdev3", 00:29:14.933 "uuid": "43fc5596-36c5-5695-be18-461c360f1e2f", 00:29:14.933 "is_configured": true, 00:29:14.933 "data_offset": 2048, 00:29:14.933 "data_size": 63488 00:29:14.933 }, 00:29:14.933 { 00:29:14.933 "name": "BaseBdev4", 00:29:14.933 "uuid": "56792456-b2ee-58a3-ad13-a946ccf1a425", 00:29:14.933 "is_configured": true, 00:29:14.934 "data_offset": 2048, 00:29:14.934 "data_size": 63488 00:29:14.934 } 00:29:14.934 ] 00:29:14.934 }' 00:29:14.934 13:13:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:29:14.934 13:13:19 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:14.934 13:13:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:29:15.192 13:13:19 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:29:15.192 13:13:19 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:16.125 13:13:20 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:29:16.125 13:13:20 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:16.125 13:13:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:29:16.125 13:13:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:29:16.125 13:13:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:29:16.125 13:13:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:29:16.125 13:13:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:16.125 13:13:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:16.383 13:13:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:29:16.383 "name": "raid_bdev1", 00:29:16.383 "uuid": "a26587d3-ebb9-4b8f-b8e6-beb9e33778f9", 00:29:16.383 "strip_size_kb": 64, 00:29:16.383 "state": "online", 00:29:16.383 "raid_level": "raid5f", 00:29:16.383 "superblock": true, 00:29:16.383 "num_base_bdevs": 4, 00:29:16.383 "num_base_bdevs_discovered": 4, 00:29:16.383 "num_base_bdevs_operational": 4, 00:29:16.383 "process": { 00:29:16.383 "type": "rebuild", 00:29:16.383 "target": "spare", 00:29:16.383 "progress": { 00:29:16.383 "blocks": 138240, 00:29:16.383 "percent": 72 00:29:16.383 } 00:29:16.383 }, 00:29:16.383 "base_bdevs_list": [ 00:29:16.383 { 00:29:16.383 "name": "spare", 00:29:16.383 "uuid": "cd285a0e-fa15-56b2-a1f6-ad6ece282174", 00:29:16.383 "is_configured": true, 00:29:16.383 "data_offset": 2048, 00:29:16.383 "data_size": 63488 00:29:16.383 }, 00:29:16.383 { 00:29:16.383 "name": "BaseBdev2", 00:29:16.383 "uuid": "90bdd4af-21ce-5509-a8fc-a2cb31da1172", 00:29:16.383 "is_configured": true, 00:29:16.383 "data_offset": 2048, 00:29:16.383 "data_size": 63488 00:29:16.383 }, 00:29:16.383 { 00:29:16.383 "name": 
"BaseBdev3", 00:29:16.383 "uuid": "43fc5596-36c5-5695-be18-461c360f1e2f", 00:29:16.383 "is_configured": true, 00:29:16.383 "data_offset": 2048, 00:29:16.383 "data_size": 63488 00:29:16.383 }, 00:29:16.383 { 00:29:16.383 "name": "BaseBdev4", 00:29:16.383 "uuid": "56792456-b2ee-58a3-ad13-a946ccf1a425", 00:29:16.383 "is_configured": true, 00:29:16.383 "data_offset": 2048, 00:29:16.383 "data_size": 63488 00:29:16.383 } 00:29:16.383 ] 00:29:16.383 }' 00:29:16.383 13:13:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:29:16.383 13:13:20 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:16.383 13:13:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:29:16.383 13:13:20 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:29:16.383 13:13:20 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:17.317 13:13:21 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:29:17.318 13:13:21 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:17.318 13:13:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:29:17.318 13:13:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:29:17.318 13:13:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:29:17.318 13:13:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:29:17.318 13:13:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:17.318 13:13:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:17.884 13:13:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:29:17.884 "name": "raid_bdev1", 00:29:17.884 "uuid": "a26587d3-ebb9-4b8f-b8e6-beb9e33778f9", 00:29:17.884 "strip_size_kb": 64, 00:29:17.884 "state": "online", 00:29:17.885 "raid_level": "raid5f", 00:29:17.885 "superblock": true, 00:29:17.885 "num_base_bdevs": 4, 00:29:17.885 "num_base_bdevs_discovered": 4, 00:29:17.885 "num_base_bdevs_operational": 4, 00:29:17.885 "process": { 00:29:17.885 "type": "rebuild", 00:29:17.885 "target": "spare", 00:29:17.885 "progress": { 00:29:17.885 "blocks": 165120, 00:29:17.885 "percent": 86 00:29:17.885 } 00:29:17.885 }, 00:29:17.885 "base_bdevs_list": [ 00:29:17.885 { 00:29:17.885 "name": "spare", 00:29:17.885 "uuid": "cd285a0e-fa15-56b2-a1f6-ad6ece282174", 00:29:17.885 "is_configured": true, 00:29:17.885 "data_offset": 2048, 00:29:17.885 "data_size": 63488 00:29:17.885 }, 00:29:17.885 { 00:29:17.885 "name": "BaseBdev2", 00:29:17.885 "uuid": "90bdd4af-21ce-5509-a8fc-a2cb31da1172", 00:29:17.885 "is_configured": true, 00:29:17.885 "data_offset": 2048, 00:29:17.885 "data_size": 63488 00:29:17.885 }, 00:29:17.885 { 00:29:17.885 "name": "BaseBdev3", 00:29:17.885 "uuid": "43fc5596-36c5-5695-be18-461c360f1e2f", 00:29:17.885 "is_configured": true, 00:29:17.885 "data_offset": 2048, 00:29:17.885 "data_size": 63488 00:29:17.885 }, 00:29:17.885 { 00:29:17.885 "name": "BaseBdev4", 00:29:17.885 "uuid": "56792456-b2ee-58a3-ad13-a946ccf1a425", 00:29:17.885 "is_configured": true, 00:29:17.885 "data_offset": 2048, 00:29:17.885 "data_size": 63488 00:29:17.885 } 00:29:17.885 ] 00:29:17.885 }' 00:29:17.885 13:13:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:29:17.885 13:13:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:17.885 13:13:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:29:17.885 13:13:21 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:29:17.885 13:13:21 -- bdev/bdev_raid.sh@662 
-- # sleep 1 00:29:18.820 13:13:22 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:29:18.820 13:13:22 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:18.820 13:13:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:29:18.820 13:13:22 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:29:18.820 13:13:22 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:29:18.820 13:13:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:29:18.820 13:13:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:18.820 13:13:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:19.079 [2024-04-17 13:13:23.121017] bdev_raid.c:2716:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:19.079 [2024-04-17 13:13:23.121345] bdev_raid.c:2433:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:19.079 [2024-04-17 13:13:23.121638] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:19.079 13:13:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:29:19.079 "name": "raid_bdev1", 00:29:19.079 "uuid": "a26587d3-ebb9-4b8f-b8e6-beb9e33778f9", 00:29:19.079 "strip_size_kb": 64, 00:29:19.079 "state": "online", 00:29:19.079 "raid_level": "raid5f", 00:29:19.079 "superblock": true, 00:29:19.079 "num_base_bdevs": 4, 00:29:19.079 "num_base_bdevs_discovered": 4, 00:29:19.079 "num_base_bdevs_operational": 4, 00:29:19.079 "base_bdevs_list": [ 00:29:19.079 { 00:29:19.079 "name": "spare", 00:29:19.079 "uuid": "cd285a0e-fa15-56b2-a1f6-ad6ece282174", 00:29:19.079 "is_configured": true, 00:29:19.079 "data_offset": 2048, 00:29:19.079 "data_size": 63488 00:29:19.079 }, 00:29:19.079 { 00:29:19.079 "name": "BaseBdev2", 00:29:19.079 "uuid": "90bdd4af-21ce-5509-a8fc-a2cb31da1172", 00:29:19.080 "is_configured": true, 00:29:19.080 "data_offset": 2048, 00:29:19.080 "data_size": 63488 00:29:19.080 }, 00:29:19.080 { 00:29:19.080 "name": "BaseBdev3", 00:29:19.080 "uuid": "43fc5596-36c5-5695-be18-461c360f1e2f", 00:29:19.080 "is_configured": true, 00:29:19.080 "data_offset": 2048, 00:29:19.080 "data_size": 63488 00:29:19.080 }, 00:29:19.080 { 00:29:19.080 "name": "BaseBdev4", 00:29:19.080 "uuid": "56792456-b2ee-58a3-ad13-a946ccf1a425", 00:29:19.080 "is_configured": true, 00:29:19.080 "data_offset": 2048, 00:29:19.080 "data_size": 63488 00:29:19.080 } 00:29:19.080 ] 00:29:19.080 }' 00:29:19.080 13:13:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:29:19.080 13:13:23 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:19.080 13:13:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:29:19.338 13:13:23 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:29:19.338 13:13:23 -- bdev/bdev_raid.sh@660 -- # break 00:29:19.338 13:13:23 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:19.338 13:13:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:29:19.338 13:13:23 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:29:19.338 13:13:23 -- bdev/bdev_raid.sh@185 -- # local target=none 00:29:19.338 13:13:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:29:19.338 13:13:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:19.338 13:13:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
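Note: the repeated JSON dumps above are the rebuild-progress poll of bdev_raid.sh (@657-@662): once a second the test fetches raid_bdev1, checks that .process.type is "rebuild" and .process.target is "spare", and watches .process.progress.blocks/.percent climb (23040 blocks / 12% up to 165120 / 86%) until the process entry disappears and the loop breaks. The one script hiccup in this run, "line 617: [: =: unary operator expected", comes from '[' = false ']': a variable that is empty in this code path is expanded unquoted, leaving = with no left operand; since the test is only an if condition, the failure simply selects the fallback branch (quoting the operand, e.g. [[ $var == false ]], would avoid the message). A condensed sketch of the poll loop, with rpc defined as in the sketch above (the deadline value here is illustrative; the script computes its own):

  timeout=$((SECONDS + 60))
  while (( SECONDS < timeout )); do
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r '.process.type // "none"' <<< "$info") == rebuild ]] || break   # rebuild done
    jq -r '.process.progress.percent' <<< "$info"   # 12, 16, 30, 44, 58, 72, 86, ...
    sleep 1
  done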
00:29:19.596 13:13:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:29:19.596 "name": "raid_bdev1", 00:29:19.596 "uuid": "a26587d3-ebb9-4b8f-b8e6-beb9e33778f9", 00:29:19.596 "strip_size_kb": 64, 00:29:19.596 "state": "online", 00:29:19.596 "raid_level": "raid5f", 00:29:19.596 "superblock": true, 00:29:19.596 "num_base_bdevs": 4, 00:29:19.596 "num_base_bdevs_discovered": 4, 00:29:19.596 "num_base_bdevs_operational": 4, 00:29:19.596 "base_bdevs_list": [ 00:29:19.596 { 00:29:19.596 "name": "spare", 00:29:19.596 "uuid": "cd285a0e-fa15-56b2-a1f6-ad6ece282174", 00:29:19.596 "is_configured": true, 00:29:19.596 "data_offset": 2048, 00:29:19.596 "data_size": 63488 00:29:19.596 }, 00:29:19.596 { 00:29:19.596 "name": "BaseBdev2", 00:29:19.596 "uuid": "90bdd4af-21ce-5509-a8fc-a2cb31da1172", 00:29:19.596 "is_configured": true, 00:29:19.596 "data_offset": 2048, 00:29:19.596 "data_size": 63488 00:29:19.596 }, 00:29:19.596 { 00:29:19.596 "name": "BaseBdev3", 00:29:19.596 "uuid": "43fc5596-36c5-5695-be18-461c360f1e2f", 00:29:19.596 "is_configured": true, 00:29:19.596 "data_offset": 2048, 00:29:19.596 "data_size": 63488 00:29:19.596 }, 00:29:19.596 { 00:29:19.596 "name": "BaseBdev4", 00:29:19.596 "uuid": "56792456-b2ee-58a3-ad13-a946ccf1a425", 00:29:19.596 "is_configured": true, 00:29:19.596 "data_offset": 2048, 00:29:19.596 "data_size": 63488 00:29:19.596 } 00:29:19.596 ] 00:29:19.596 }' 00:29:19.596 13:13:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:29:19.596 13:13:23 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:19.596 13:13:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:29:19.596 13:13:23 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:29:19.596 13:13:23 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:29:19.596 13:13:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:29:19.596 13:13:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:29:19.596 13:13:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:29:19.596 13:13:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:29:19.596 13:13:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:29:19.596 13:13:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:19.596 13:13:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:19.596 13:13:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:19.596 13:13:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:19.596 13:13:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:19.596 13:13:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:19.854 13:13:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:19.854 "name": "raid_bdev1", 00:29:19.854 "uuid": "a26587d3-ebb9-4b8f-b8e6-beb9e33778f9", 00:29:19.854 "strip_size_kb": 64, 00:29:19.854 "state": "online", 00:29:19.854 "raid_level": "raid5f", 00:29:19.854 "superblock": true, 00:29:19.854 "num_base_bdevs": 4, 00:29:19.855 "num_base_bdevs_discovered": 4, 00:29:19.855 "num_base_bdevs_operational": 4, 00:29:19.855 "base_bdevs_list": [ 00:29:19.855 { 00:29:19.855 "name": "spare", 00:29:19.855 "uuid": "cd285a0e-fa15-56b2-a1f6-ad6ece282174", 00:29:19.855 "is_configured": true, 00:29:19.855 "data_offset": 2048, 00:29:19.855 "data_size": 63488 00:29:19.855 }, 00:29:19.855 { 00:29:19.855 "name": "BaseBdev2", 00:29:19.855 "uuid": "90bdd4af-21ce-5509-a8fc-a2cb31da1172", 
00:29:19.855 "is_configured": true, 00:29:19.855 "data_offset": 2048, 00:29:19.855 "data_size": 63488 00:29:19.855 }, 00:29:19.855 { 00:29:19.855 "name": "BaseBdev3", 00:29:19.855 "uuid": "43fc5596-36c5-5695-be18-461c360f1e2f", 00:29:19.855 "is_configured": true, 00:29:19.855 "data_offset": 2048, 00:29:19.855 "data_size": 63488 00:29:19.855 }, 00:29:19.855 { 00:29:19.855 "name": "BaseBdev4", 00:29:19.855 "uuid": "56792456-b2ee-58a3-ad13-a946ccf1a425", 00:29:19.855 "is_configured": true, 00:29:19.855 "data_offset": 2048, 00:29:19.855 "data_size": 63488 00:29:19.855 } 00:29:19.855 ] 00:29:19.855 }' 00:29:19.855 13:13:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:19.855 13:13:23 -- common/autotest_common.sh@10 -- # set +x 00:29:20.792 13:13:24 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:20.792 [2024-04-17 13:13:24.927758] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:20.792 [2024-04-17 13:13:24.928087] bdev_raid.c:1857:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:20.792 [2024-04-17 13:13:24.928282] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:20.792 [2024-04-17 13:13:24.928497] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:20.792 [2024-04-17 13:13:24.928615] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:29:21.114 13:13:24 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:21.114 13:13:24 -- bdev/bdev_raid.sh@671 -- # jq length 00:29:21.114 13:13:25 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:29:21.114 13:13:25 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:29:21.114 13:13:25 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:29:21.114 13:13:25 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:21.114 13:13:25 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:29:21.114 13:13:25 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:21.114 13:13:25 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:29:21.114 13:13:25 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:21.114 13:13:25 -- bdev/nbd_common.sh@12 -- # local i 00:29:21.115 13:13:25 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:21.115 13:13:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:21.115 13:13:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:29:21.374 /dev/nbd0 00:29:21.374 13:13:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:21.374 13:13:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:21.374 13:13:25 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:29:21.374 13:13:25 -- common/autotest_common.sh@855 -- # local i 00:29:21.374 13:13:25 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:29:21.374 13:13:25 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:29:21.374 13:13:25 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:29:21.374 13:13:25 -- common/autotest_common.sh@859 -- # break 00:29:21.374 13:13:25 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:29:21.374 13:13:25 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:29:21.374 13:13:25 -- 
common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:21.374 1+0 records in 00:29:21.374 1+0 records out 00:29:21.374 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000470909 s, 8.7 MB/s 00:29:21.374 13:13:25 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:21.374 13:13:25 -- common/autotest_common.sh@872 -- # size=4096 00:29:21.374 13:13:25 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:21.374 13:13:25 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:29:21.374 13:13:25 -- common/autotest_common.sh@875 -- # return 0 00:29:21.374 13:13:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:21.374 13:13:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:21.374 13:13:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:29:21.633 /dev/nbd1 00:29:21.892 13:13:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:21.892 13:13:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:21.892 13:13:25 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:29:21.892 13:13:25 -- common/autotest_common.sh@855 -- # local i 00:29:21.892 13:13:25 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:29:21.892 13:13:25 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:29:21.892 13:13:25 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:29:21.892 13:13:25 -- common/autotest_common.sh@859 -- # break 00:29:21.892 13:13:25 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:29:21.892 13:13:25 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:29:21.892 13:13:25 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:21.892 1+0 records in 00:29:21.892 1+0 records out 00:29:21.892 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00053201 s, 7.7 MB/s 00:29:21.892 13:13:25 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:21.892 13:13:25 -- common/autotest_common.sh@872 -- # size=4096 00:29:21.892 13:13:25 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:21.892 13:13:25 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:29:21.892 13:13:25 -- common/autotest_common.sh@875 -- # return 0 00:29:21.892 13:13:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:21.892 13:13:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:21.892 13:13:25 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:29:21.892 13:13:25 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:29:21.892 13:13:25 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:21.892 13:13:25 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:29:21.892 13:13:25 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:21.892 13:13:25 -- bdev/nbd_common.sh@51 -- # local i 00:29:21.892 13:13:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:21.892 13:13:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:22.151 13:13:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:22.151 13:13:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:22.151 13:13:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:22.151 
13:13:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:22.151 13:13:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:22.151 13:13:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:22.151 13:13:26 -- bdev/nbd_common.sh@41 -- # break 00:29:22.151 13:13:26 -- bdev/nbd_common.sh@45 -- # return 0 00:29:22.151 13:13:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:22.151 13:13:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:29:22.409 13:13:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:22.409 13:13:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:22.409 13:13:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:22.409 13:13:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:22.409 13:13:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:22.409 13:13:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:22.409 13:13:26 -- bdev/nbd_common.sh@41 -- # break 00:29:22.409 13:13:26 -- bdev/nbd_common.sh@45 -- # return 0 00:29:22.409 13:13:26 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:29:22.409 13:13:26 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:29:22.409 13:13:26 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:29:22.409 13:13:26 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:29:22.976 13:13:26 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:22.976 [2024-04-17 13:13:27.064575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:22.976 [2024-04-17 13:13:27.065006] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:22.976 [2024-04-17 13:13:27.065087] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:29:22.976 [2024-04-17 13:13:27.065340] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:22.976 [2024-04-17 13:13:27.067994] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:22.976 [2024-04-17 13:13:27.068186] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:22.976 [2024-04-17 13:13:27.068438] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:29:22.976 [2024-04-17 13:13:27.068604] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:22.976 BaseBdev1 00:29:22.976 13:13:27 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:29:22.976 13:13:27 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:29:22.976 13:13:27 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:29:23.235 13:13:27 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:23.493 [2024-04-17 13:13:27.581230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:23.493 [2024-04-17 13:13:27.581560] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:23.493 [2024-04-17 13:13:27.581713] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 
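Note: this final phase exercises reassembly from the on-disk superblock (the _sb in the test name). The surrounding trace deletes each passthru base bdev and recreates it over its backing bdev; as each one re-registers, bdev_raid finds the raid superblock on it and re-claims it into raid_bdev1, so the array comes back online with no explicit bdev_raid_create. A minimal sketch, with rpc defined as in the first sketch and names taken from this run:

  for b in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
    $rpc bdev_passthru_delete "$b"
    $rpc bdev_passthru_create -b "${b}_malloc" -p "$b"   # superblock re-claims it
  done
  $rpc bdev_passthru_delete spare
  $rpc bdev_passthru_create -b spare_delay -p spare      # the spare sits on the delay bdev

  # the first member of the reassembled array should be the rebuilt spare
  $rpc bdev_raid_get_bdevs all | jq -r '.[].base_bdevs_list[0].name'   # expect: spare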
00:29:23.493 [2024-04-17 13:13:27.581831] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:23.493 [2024-04-17 13:13:27.582532] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:23.493 [2024-04-17 13:13:27.582726] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:23.493 [2024-04-17 13:13:27.582946] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:29:23.493 [2024-04-17 13:13:27.583056] bdev_raid.c:3395:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:29:23.493 [2024-04-17 13:13:27.583163] bdev_raid.c:2285:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:23.493 [2024-04-17 13:13:27.583288] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state configuring 00:29:23.493 [2024-04-17 13:13:27.583473] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:23.493 BaseBdev2 00:29:23.493 13:13:27 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:29:23.493 13:13:27 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:29:23.493 13:13:27 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:29:23.751 13:13:27 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:29:24.010 [2024-04-17 13:13:28.057382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:29:24.010 [2024-04-17 13:13:28.057662] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:24.010 [2024-04-17 13:13:28.057854] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:29:24.010 [2024-04-17 13:13:28.058050] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:24.010 [2024-04-17 13:13:28.058648] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:24.010 [2024-04-17 13:13:28.058822] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:29:24.010 [2024-04-17 13:13:28.059038] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:29:24.010 [2024-04-17 13:13:28.059169] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:24.010 BaseBdev3 00:29:24.010 13:13:28 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:29:24.010 13:13:28 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:29:24.010 13:13:28 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:29:24.268 13:13:28 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:29:24.529 [2024-04-17 13:13:28.533512] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:29:24.529 [2024-04-17 13:13:28.533754] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:24.529 [2024-04-17 13:13:28.533942] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:29:24.529 [2024-04-17 13:13:28.534061] vbdev_passthru.c: 
691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:24.529 [2024-04-17 13:13:28.534677] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:24.529 [2024-04-17 13:13:28.534838] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:29:24.529 [2024-04-17 13:13:28.535061] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:29:24.529 [2024-04-17 13:13:28.535191] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:24.529 BaseBdev4 00:29:24.529 13:13:28 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:24.788 13:13:28 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:25.057 [2024-04-17 13:13:29.005678] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:25.057 [2024-04-17 13:13:29.006052] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:25.057 [2024-04-17 13:13:29.006232] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:29:25.057 [2024-04-17 13:13:29.006352] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:25.057 [2024-04-17 13:13:29.006994] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:25.057 [2024-04-17 13:13:29.007188] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:25.057 [2024-04-17 13:13:29.007450] bdev_raid.c:3500:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:29:25.057 [2024-04-17 13:13:29.007585] bdev_raid.c:3087:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:25.057 spare 00:29:25.057 13:13:29 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:29:25.057 13:13:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:29:25.057 13:13:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:29:25.057 13:13:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:29:25.057 13:13:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:29:25.057 13:13:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:29:25.057 13:13:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:25.057 13:13:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:25.057 13:13:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:25.057 13:13:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:25.057 13:13:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:25.057 13:13:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:25.057 [2024-04-17 13:13:29.107866] bdev_raid.c:1706:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c380 00:29:25.057 [2024-04-17 13:13:29.108127] bdev_raid.c:1707:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:29:25.057 [2024-04-17 13:13:29.108336] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00004cc50 00:29:25.057 [2024-04-17 13:13:29.114954] bdev_raid.c:1736:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c380 00:29:25.057 [2024-04-17 13:13:29.115081] bdev_raid.c:1737:raid_bdev_configure_cont: *DEBUG*: raid bdev 
is created with name raid_bdev1, raid_bdev 0x61600000c380 00:29:25.057 [2024-04-17 13:13:29.115355] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:25.338 13:13:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:25.338 "name": "raid_bdev1", 00:29:25.338 "uuid": "a26587d3-ebb9-4b8f-b8e6-beb9e33778f9", 00:29:25.338 "strip_size_kb": 64, 00:29:25.338 "state": "online", 00:29:25.338 "raid_level": "raid5f", 00:29:25.338 "superblock": true, 00:29:25.338 "num_base_bdevs": 4, 00:29:25.338 "num_base_bdevs_discovered": 4, 00:29:25.338 "num_base_bdevs_operational": 4, 00:29:25.338 "base_bdevs_list": [ 00:29:25.338 { 00:29:25.338 "name": "spare", 00:29:25.338 "uuid": "cd285a0e-fa15-56b2-a1f6-ad6ece282174", 00:29:25.338 "is_configured": true, 00:29:25.338 "data_offset": 2048, 00:29:25.338 "data_size": 63488 00:29:25.338 }, 00:29:25.338 { 00:29:25.338 "name": "BaseBdev2", 00:29:25.338 "uuid": "90bdd4af-21ce-5509-a8fc-a2cb31da1172", 00:29:25.338 "is_configured": true, 00:29:25.338 "data_offset": 2048, 00:29:25.338 "data_size": 63488 00:29:25.338 }, 00:29:25.338 { 00:29:25.338 "name": "BaseBdev3", 00:29:25.338 "uuid": "43fc5596-36c5-5695-be18-461c360f1e2f", 00:29:25.338 "is_configured": true, 00:29:25.338 "data_offset": 2048, 00:29:25.338 "data_size": 63488 00:29:25.338 }, 00:29:25.338 { 00:29:25.338 "name": "BaseBdev4", 00:29:25.338 "uuid": "56792456-b2ee-58a3-ad13-a946ccf1a425", 00:29:25.338 "is_configured": true, 00:29:25.338 "data_offset": 2048, 00:29:25.338 "data_size": 63488 00:29:25.338 } 00:29:25.338 ] 00:29:25.338 }' 00:29:25.338 13:13:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:25.338 13:13:29 -- common/autotest_common.sh@10 -- # set +x 00:29:25.927 13:13:30 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:25.927 13:13:30 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:29:25.927 13:13:30 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:29:25.927 13:13:30 -- bdev/bdev_raid.sh@185 -- # local target=none 00:29:25.927 13:13:30 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:29:25.927 13:13:30 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:25.927 13:13:30 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:26.493 13:13:30 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:29:26.493 "name": "raid_bdev1", 00:29:26.493 "uuid": "a26587d3-ebb9-4b8f-b8e6-beb9e33778f9", 00:29:26.493 "strip_size_kb": 64, 00:29:26.493 "state": "online", 00:29:26.493 "raid_level": "raid5f", 00:29:26.493 "superblock": true, 00:29:26.493 "num_base_bdevs": 4, 00:29:26.493 "num_base_bdevs_discovered": 4, 00:29:26.493 "num_base_bdevs_operational": 4, 00:29:26.493 "base_bdevs_list": [ 00:29:26.493 { 00:29:26.493 "name": "spare", 00:29:26.493 "uuid": "cd285a0e-fa15-56b2-a1f6-ad6ece282174", 00:29:26.493 "is_configured": true, 00:29:26.493 "data_offset": 2048, 00:29:26.493 "data_size": 63488 00:29:26.493 }, 00:29:26.493 { 00:29:26.493 "name": "BaseBdev2", 00:29:26.493 "uuid": "90bdd4af-21ce-5509-a8fc-a2cb31da1172", 00:29:26.493 "is_configured": true, 00:29:26.493 "data_offset": 2048, 00:29:26.493 "data_size": 63488 00:29:26.493 }, 00:29:26.493 { 00:29:26.493 "name": "BaseBdev3", 00:29:26.493 "uuid": "43fc5596-36c5-5695-be18-461c360f1e2f", 00:29:26.493 "is_configured": true, 00:29:26.493 "data_offset": 2048, 00:29:26.493 "data_size": 63488 00:29:26.493 }, 00:29:26.493 { 00:29:26.493 "name": "BaseBdev4", 00:29:26.493 
"uuid": "56792456-b2ee-58a3-ad13-a946ccf1a425", 00:29:26.493 "is_configured": true, 00:29:26.493 "data_offset": 2048, 00:29:26.493 "data_size": 63488 00:29:26.493 } 00:29:26.493 ] 00:29:26.493 }' 00:29:26.493 13:13:30 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:29:26.493 13:13:30 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:26.493 13:13:30 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:29:26.493 13:13:30 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:29:26.493 13:13:30 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:26.493 13:13:30 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:29:26.752 13:13:30 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:29:26.752 13:13:30 -- bdev/bdev_raid.sh@709 -- # killprocess 140864 00:29:26.752 13:13:30 -- common/autotest_common.sh@924 -- # '[' -z 140864 ']' 00:29:26.752 13:13:30 -- common/autotest_common.sh@928 -- # kill -0 140864 00:29:26.752 13:13:30 -- common/autotest_common.sh@929 -- # uname 00:29:26.752 13:13:30 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:29:26.752 13:13:30 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 140864 00:29:26.752 killing process with pid 140864 00:29:26.752 13:13:30 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:29:26.752 13:13:30 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:29:26.752 13:13:30 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 140864' 00:29:26.752 13:13:30 -- common/autotest_common.sh@943 -- # kill 140864 00:29:26.752 13:13:30 -- common/autotest_common.sh@948 -- # wait 140864 00:29:26.752 Received shutdown signal, test time was about 60.000000 seconds 00:29:26.752 00:29:26.752 Latency(us) 00:29:26.752 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:26.752 =================================================================================================================== 00:29:26.752 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:26.752 [2024-04-17 13:13:30.738409] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:26.752 [2024-04-17 13:13:30.738534] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:26.752 [2024-04-17 13:13:30.738625] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:26.752 [2024-04-17 13:13:30.738673] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c380 name raid_bdev1, state offline 00:29:27.011 [2024-04-17 13:13:31.155779] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:28.388 ************************************ 00:29:28.388 END TEST raid5f_rebuild_test_sb 00:29:28.388 ************************************ 00:29:28.388 13:13:32 -- bdev/bdev_raid.sh@711 -- # return 0 00:29:28.388 00:29:28.388 real 0m30.929s 00:29:28.388 user 0m48.691s 00:29:28.388 sys 0m3.273s 00:29:28.388 13:13:32 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:29:28.388 13:13:32 -- common/autotest_common.sh@10 -- # set +x 00:29:28.388 13:13:32 -- bdev/bdev_raid.sh@754 -- # rm -f /raidrandtest 00:29:28.388 00:29:28.388 real 13m22.536s 00:29:28.388 user 22m24.654s 00:29:28.388 sys 1m35.003s 00:29:28.388 ************************************ 00:29:28.388 END TEST bdev_raid 00:29:28.388 ************************************ 00:29:28.388 13:13:32 -- 
common/autotest_common.sh@1100 -- # xtrace_disable 00:29:28.388 13:13:32 -- common/autotest_common.sh@10 -- # set +x 00:29:28.388 13:13:32 -- spdk/autotest.sh@186 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:29:28.388 13:13:32 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:29:28.388 13:13:32 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:29:28.388 13:13:32 -- common/autotest_common.sh@10 -- # set +x 00:29:28.388 ************************************ 00:29:28.388 START TEST bdevperf_config 00:29:28.388 ************************************ 00:29:28.388 13:13:32 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:29:28.388 * Looking for test storage... 00:29:28.388 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:29:28.388 13:13:32 -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:29:28.388 13:13:32 -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:29:28.388 13:13:32 -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:29:28.388 13:13:32 -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:29:28.388 13:13:32 -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:28.388 13:13:32 -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:29:28.388 13:13:32 -- bdevperf/common.sh@8 -- # local job_section=global 00:29:28.388 13:13:32 -- bdevperf/common.sh@9 -- # local rw=read 00:29:28.388 13:13:32 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:29:28.388 13:13:32 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:29:28.388 13:13:32 -- bdevperf/common.sh@13 -- # cat 00:29:28.388 13:13:32 -- bdevperf/common.sh@18 -- # job='[global]' 00:29:28.388 13:13:32 -- bdevperf/common.sh@19 -- # echo 00:29:28.388 00:29:28.388 13:13:32 -- bdevperf/common.sh@20 -- # cat 00:29:28.388 13:13:32 -- bdevperf/test_config.sh@18 -- # create_job job0 00:29:28.388 00:29:28.388 13:13:32 -- bdevperf/common.sh@8 -- # local job_section=job0 00:29:28.388 13:13:32 -- bdevperf/common.sh@9 -- # local rw= 00:29:28.388 13:13:32 -- bdevperf/common.sh@10 -- # local filename= 00:29:28.388 13:13:32 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:29:28.388 13:13:32 -- bdevperf/common.sh@18 -- # job='[job0]' 00:29:28.388 13:13:32 -- bdevperf/common.sh@19 -- # echo 00:29:28.388 13:13:32 -- bdevperf/common.sh@20 -- # cat 00:29:28.388 13:13:32 -- bdevperf/test_config.sh@19 -- # create_job job1 00:29:28.388 00:29:28.388 13:13:32 -- bdevperf/common.sh@8 -- # local job_section=job1 00:29:28.388 13:13:32 -- bdevperf/common.sh@9 -- # local rw= 00:29:28.388 13:13:32 -- bdevperf/common.sh@10 -- # local filename= 00:29:28.388 13:13:32 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:29:28.388 13:13:32 -- bdevperf/common.sh@18 -- # job='[job1]' 00:29:28.388 13:13:32 -- bdevperf/common.sh@19 -- # echo 00:29:28.388 13:13:32 -- bdevperf/common.sh@20 -- # cat 00:29:28.388 13:13:32 -- bdevperf/test_config.sh@20 -- # create_job job2 00:29:28.388 00:29:28.388 13:13:32 -- bdevperf/common.sh@8 -- # local job_section=job2 00:29:28.388 13:13:32 -- bdevperf/common.sh@9 -- # local rw= 00:29:28.388 13:13:32 -- bdevperf/common.sh@10 -- # local filename= 00:29:28.388 13:13:32 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 
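The create_job calls traced above and below assemble the INI-style job file that bdevperf later consumes through its -j flag: each call opens a [global] or [jobN] section in test.conf and appends rw= and filename= keys only when the caller supplies them (compare the job='[global]' assignment after create_job global read Malloc0 with the bare job='[job0]' sections). A minimal sketch of a helper along those lines, assuming a $testconf variable holding the test.conf path; the verbatim common.sh implementation may differ in detail:

    create_job() {
        local job_section=$1 rw=$2 filename=$3
        # every call opens its own INI section, e.g. [global] or [job0]
        echo "[$job_section]" >> "$testconf"
        # rw= and filename= lines are written only when the caller supplies them
        if [[ -n $rw ]]; then echo "rw=$rw" >> "$testconf"; fi
        if [[ -n $filename ]]; then echo "filename=$filename" >> "$testconf"; fi
    }

With that shape, create_job global read Malloc0 followed by four bare create_job jobN calls yields one shared [global] section (rw=read, filename=Malloc0) plus four empty per-job sections, which is exactly the file the bdevperf invocation below picks up with -j.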
00:29:28.388 13:13:32 -- bdevperf/common.sh@18 -- # job='[job2]' 00:29:28.388 13:13:32 -- bdevperf/common.sh@19 -- # echo 00:29:28.388 13:13:32 -- bdevperf/common.sh@20 -- # cat 00:29:28.388 13:13:32 -- bdevperf/test_config.sh@21 -- # create_job job3 00:29:28.388 00:29:28.388 13:13:32 -- bdevperf/common.sh@8 -- # local job_section=job3 00:29:28.388 13:13:32 -- bdevperf/common.sh@9 -- # local rw= 00:29:28.388 13:13:32 -- bdevperf/common.sh@10 -- # local filename= 00:29:28.388 13:13:32 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:29:28.388 13:13:32 -- bdevperf/common.sh@18 -- # job='[job3]' 00:29:28.388 13:13:32 -- bdevperf/common.sh@19 -- # echo 00:29:28.388 13:13:32 -- bdevperf/common.sh@20 -- # cat 00:29:28.388 13:13:32 -- bdevperf/test_config.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:29:33.663 13:13:36 -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-04-17 13:13:32.577221] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:29:33.663 [2024-04-17 13:13:32.578090] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141686 ] 00:29:33.663 Using job config with 4 jobs 00:29:33.663 [2024-04-17 13:13:32.752801] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.663 [2024-04-17 13:13:32.970393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:33.663 [2024-04-17 13:13:32.971701] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:29:33.663 [2024-04-17 13:13:33.424323] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:29:33.663 cpumask for '\''job0'\'' is too big 00:29:33.663 cpumask for '\''job1'\'' is too big 00:29:33.663 cpumask for '\''job2'\'' is too big 00:29:33.663 cpumask for '\''job3'\'' is too big 00:29:33.663 Running I/O for 2 seconds... 00:29:33.663 00:29:33.663 Latency(us) 00:29:33.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:33.663 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:33.663 Malloc0 : 2.02 25734.36 25.13 0.00 0.00 9937.95 1772.45 15728.64 00:29:33.663 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:33.663 Malloc0 : 2.02 25712.91 25.11 0.00 0.00 9924.46 1765.00 13881.72 00:29:33.663 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:33.663 Malloc0 : 2.02 25692.52 25.09 0.00 0.00 9911.41 1839.48 11975.21 00:29:33.663 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:33.663 Malloc0 : 2.02 25671.54 25.07 0.00 0.00 9897.77 1794.79 10962.39 00:29:33.663 =================================================================================================================== 00:29:33.663 Total : 102811.33 100.40 0.00 0.00 9917.90 1765.00 15728.64' 00:29:33.663 13:13:36 -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-04-17 13:13:32.577221] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:29:33.663 [2024-04-17 13:13:32.578090] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141686 ] 00:29:33.663 Using job config with 4 jobs 00:29:33.663 [2024-04-17 13:13:32.752801] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.663 [2024-04-17 13:13:32.970393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:33.663 [2024-04-17 13:13:32.971701] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:29:33.663 [2024-04-17 13:13:33.424323] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:29:33.663 cpumask for '\''job0'\'' is too big 00:29:33.663 cpumask for '\''job1'\'' is too big 00:29:33.663 cpumask for '\''job2'\'' is too big 00:29:33.663 cpumask for '\''job3'\'' is too big 00:29:33.663 Running I/O for 2 seconds... 00:29:33.663 00:29:33.663 Latency(us) 00:29:33.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:33.663 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:33.663 Malloc0 : 2.02 25734.36 25.13 0.00 0.00 9937.95 1772.45 15728.64 00:29:33.663 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:33.663 Malloc0 : 2.02 25712.91 25.11 0.00 0.00 9924.46 1765.00 13881.72 00:29:33.663 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:33.663 Malloc0 : 2.02 25692.52 25.09 0.00 0.00 9911.41 1839.48 11975.21 00:29:33.663 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:33.663 Malloc0 : 2.02 25671.54 25.07 0.00 0.00 9897.77 1794.79 10962.39 00:29:33.663 =================================================================================================================== 00:29:33.663 Total : 102811.33 100.40 0.00 0.00 9917.90 1765.00 15728.64' 00:29:33.663 13:13:36 -- bdevperf/common.sh@32 -- # echo '[2024-04-17 13:13:32.577221] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:29:33.663 [2024-04-17 13:13:32.578090] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141686 ] 00:29:33.663 Using job config with 4 jobs 00:29:33.663 [2024-04-17 13:13:32.752801] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.663 [2024-04-17 13:13:32.970393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:33.663 [2024-04-17 13:13:32.971701] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:29:33.663 [2024-04-17 13:13:33.424323] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:29:33.663 cpumask for '\''job0'\'' is too big 00:29:33.663 cpumask for '\''job1'\'' is too big 00:29:33.663 cpumask for '\''job2'\'' is too big 00:29:33.663 cpumask for '\''job3'\'' is too big 00:29:33.663 Running I/O for 2 seconds... 
00:29:33.663 00:29:33.663 Latency(us) 00:29:33.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:33.663 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:33.663 Malloc0 : 2.02 25734.36 25.13 0.00 0.00 9937.95 1772.45 15728.64 00:29:33.663 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:33.663 Malloc0 : 2.02 25712.91 25.11 0.00 0.00 9924.46 1765.00 13881.72 00:29:33.663 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:33.663 Malloc0 : 2.02 25692.52 25.09 0.00 0.00 9911.41 1839.48 11975.21 00:29:33.663 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:33.663 Malloc0 : 2.02 25671.54 25.07 0.00 0.00 9897.77 1794.79 10962.39 00:29:33.663 =================================================================================================================== 00:29:33.663 Total : 102811.33 100.40 0.00 0.00 9917.90 1765.00 15728.64' 00:29:33.663 13:13:36 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:29:33.663 13:13:36 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:29:33.663 13:13:36 -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:29:33.663 13:13:36 -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:29:33.663 [2024-04-17 13:13:36.929518] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:29:33.663 [2024-04-17 13:13:36.929723] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141748 ] 00:29:33.663 [2024-04-17 13:13:37.098383] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.663 [2024-04-17 13:13:37.339891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:33.663 [2024-04-17 13:13:37.341180] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:29:33.664 [2024-04-17 13:13:37.808517] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:29:33.664 cpumask for 'job0' is too big 00:29:33.923 cpumask for 'job1' is too big 00:29:33.923 cpumask for 'job2' is too big 00:29:33.923 cpumask for 'job3' is too big 00:29:37.212 13:13:41 -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:29:37.212 Running I/O for 2 seconds... 
00:29:37.212 00:29:37.212 Latency(us) 00:29:37.212 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:37.212 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:37.212 Malloc0 : 2.02 25722.21 25.12 0.00 0.00 9943.33 1824.58 16086.11 00:29:37.212 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:37.212 Malloc0 : 2.02 25702.46 25.10 0.00 0.00 9929.00 1779.90 14417.92 00:29:37.212 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:37.212 Malloc0 : 2.02 25682.91 25.08 0.00 0.00 9915.05 1891.61 12511.42 00:29:37.212 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:37.212 Malloc0 : 2.02 25663.82 25.06 0.00 0.00 9899.83 1876.71 11439.01 00:29:37.212 =================================================================================================================== 00:29:37.212 Total : 102771.39 100.36 0.00 0.00 9921.80 1779.90 16086.11' 00:29:37.212 13:13:41 -- bdevperf/test_config.sh@27 -- # cleanup 00:29:37.212 13:13:41 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:29:37.212 13:13:41 -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:29:37.212 00:29:37.212 13:13:41 -- bdevperf/common.sh@8 -- # local job_section=job0 00:29:37.212 13:13:41 -- bdevperf/common.sh@9 -- # local rw=write 00:29:37.212 13:13:41 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:29:37.212 13:13:41 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:29:37.212 13:13:41 -- bdevperf/common.sh@18 -- # job='[job0]' 00:29:37.212 13:13:41 -- bdevperf/common.sh@19 -- # echo 00:29:37.212 13:13:41 -- bdevperf/common.sh@20 -- # cat 00:29:37.212 13:13:41 -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:29:37.212 13:13:41 -- bdevperf/common.sh@8 -- # local job_section=job1 00:29:37.212 00:29:37.212 13:13:41 -- bdevperf/common.sh@9 -- # local rw=write 00:29:37.212 13:13:41 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:29:37.212 13:13:41 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:29:37.212 13:13:41 -- bdevperf/common.sh@18 -- # job='[job1]' 00:29:37.212 13:13:41 -- bdevperf/common.sh@19 -- # echo 00:29:37.212 13:13:41 -- bdevperf/common.sh@20 -- # cat 00:29:37.212 13:13:41 -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:29:37.212 13:13:41 -- bdevperf/common.sh@8 -- # local job_section=job2 00:29:37.212 13:13:41 -- bdevperf/common.sh@9 -- # local rw=write 00:29:37.212 13:13:41 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:29:37.212 00:29:37.212 13:13:41 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:29:37.212 13:13:41 -- bdevperf/common.sh@18 -- # job='[job2]' 00:29:37.212 13:13:41 -- bdevperf/common.sh@19 -- # echo 00:29:37.212 13:13:41 -- bdevperf/common.sh@20 -- # cat 00:29:37.212 13:13:41 -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:29:42.482 13:13:45 -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-04-17 13:13:41.370192] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:29:42.482 [2024-04-17 13:13:41.370407] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141799 ] 00:29:42.482 Using job config with 3 jobs 00:29:42.482 [2024-04-17 13:13:41.543195] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:42.482 [2024-04-17 13:13:41.782214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:42.482 [2024-04-17 13:13:41.783380] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:29:42.482 [2024-04-17 13:13:42.225794] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:29:42.482 cpumask for '\''job0'\'' is too big 00:29:42.482 cpumask for '\''job1'\'' is too big 00:29:42.482 cpumask for '\''job2'\'' is too big 00:29:42.482 Running I/O for 2 seconds... 00:29:42.482 00:29:42.482 Latency(us) 00:29:42.482 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:42.482 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:29:42.482 Malloc0 : 2.02 37471.10 36.59 0.00 0.00 6824.84 1727.77 10604.92 00:29:42.482 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:29:42.482 Malloc0 : 2.02 37439.52 36.56 0.00 0.00 6816.42 1660.74 8877.15 00:29:42.482 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:29:42.482 Malloc0 : 2.02 37407.16 36.53 0.00 0.00 6808.70 1675.64 8102.63 00:29:42.482 =================================================================================================================== 00:29:42.482 Total : 112317.77 109.69 0.00 0.00 6816.65 1660.74 10604.92' 00:29:42.482 13:13:45 -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-04-17 13:13:41.370192] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:29:42.482 [2024-04-17 13:13:41.370407] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141799 ] 00:29:42.482 Using job config with 3 jobs 00:29:42.482 [2024-04-17 13:13:41.543195] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:42.482 [2024-04-17 13:13:41.782214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:42.482 [2024-04-17 13:13:41.783380] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:29:42.482 [2024-04-17 13:13:42.225794] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:29:42.482 cpumask for '\''job0'\'' is too big 00:29:42.482 cpumask for '\''job1'\'' is too big 00:29:42.482 cpumask for '\''job2'\'' is too big 00:29:42.482 Running I/O for 2 seconds... 
00:29:42.482 00:29:42.482 Latency(us) 00:29:42.482 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:42.482 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:29:42.482 Malloc0 : 2.02 37471.10 36.59 0.00 0.00 6824.84 1727.77 10604.92 00:29:42.482 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:29:42.482 Malloc0 : 2.02 37439.52 36.56 0.00 0.00 6816.42 1660.74 8877.15 00:29:42.482 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:29:42.482 Malloc0 : 2.02 37407.16 36.53 0.00 0.00 6808.70 1675.64 8102.63 00:29:42.482 =================================================================================================================== 00:29:42.482 Total : 112317.77 109.69 0.00 0.00 6816.65 1660.74 10604.92' 00:29:42.482 13:13:45 -- bdevperf/common.sh@32 -- # echo '[2024-04-17 13:13:41.370192] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:29:42.482 [2024-04-17 13:13:41.370407] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141799 ] 00:29:42.482 Using job config with 3 jobs 00:29:42.482 [2024-04-17 13:13:41.543195] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:42.482 [2024-04-17 13:13:41.782214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:42.483 [2024-04-17 13:13:41.783380] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:29:42.483 [2024-04-17 13:13:42.225794] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:29:42.483 cpumask for '\''job0'\'' is too big 00:29:42.483 cpumask for '\''job1'\'' is too big 00:29:42.483 cpumask for '\''job2'\'' is too big 00:29:42.483 Running I/O for 2 seconds... 
00:29:42.483 00:29:42.483 Latency(us) 00:29:42.483 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:42.483 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:29:42.483 Malloc0 : 2.02 37471.10 36.59 0.00 0.00 6824.84 1727.77 10604.92 00:29:42.483 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:29:42.483 Malloc0 : 2.02 37439.52 36.56 0.00 0.00 6816.42 1660.74 8877.15 00:29:42.483 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:29:42.483 Malloc0 : 2.02 37407.16 36.53 0.00 0.00 6808.70 1675.64 8102.63 00:29:42.483 =================================================================================================================== 00:29:42.483 Total : 112317.77 109.69 0.00 0.00 6816.65 1660.74 10604.92' 00:29:42.483 13:13:45 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:29:42.483 13:13:45 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:29:42.483 13:13:45 -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:29:42.483 13:13:45 -- bdevperf/test_config.sh@35 -- # cleanup 00:29:42.483 13:13:45 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:29:42.483 13:13:45 -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:29:42.483 13:13:45 -- bdevperf/common.sh@8 -- # local job_section=global 00:29:42.483 13:13:45 -- bdevperf/common.sh@9 -- # local rw=rw 00:29:42.483 13:13:45 -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:29:42.483 13:13:45 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:29:42.483 13:13:45 -- bdevperf/common.sh@13 -- # cat 00:29:42.483 00:29:42.483 13:13:45 -- bdevperf/common.sh@18 -- # job='[global]' 00:29:42.483 13:13:45 -- bdevperf/common.sh@19 -- # echo 00:29:42.483 13:13:45 -- bdevperf/common.sh@20 -- # cat 00:29:42.483 13:13:45 -- bdevperf/test_config.sh@38 -- # create_job job0 00:29:42.483 13:13:45 -- bdevperf/common.sh@8 -- # local job_section=job0 00:29:42.483 13:13:45 -- bdevperf/common.sh@9 -- # local rw= 00:29:42.483 13:13:45 -- bdevperf/common.sh@10 -- # local filename= 00:29:42.483 13:13:45 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:29:42.483 00:29:42.483 13:13:45 -- bdevperf/common.sh@18 -- # job='[job0]' 00:29:42.483 13:13:45 -- bdevperf/common.sh@19 -- # echo 00:29:42.483 13:13:45 -- bdevperf/common.sh@20 -- # cat 00:29:42.483 13:13:45 -- bdevperf/test_config.sh@39 -- # create_job job1 00:29:42.483 00:29:42.483 13:13:45 -- bdevperf/common.sh@8 -- # local job_section=job1 00:29:42.483 13:13:45 -- bdevperf/common.sh@9 -- # local rw= 00:29:42.483 13:13:45 -- bdevperf/common.sh@10 -- # local filename= 00:29:42.483 13:13:45 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:29:42.483 13:13:45 -- bdevperf/common.sh@18 -- # job='[job1]' 00:29:42.483 13:13:45 -- bdevperf/common.sh@19 -- # echo 00:29:42.483 13:13:45 -- bdevperf/common.sh@20 -- # cat 00:29:42.483 13:13:45 -- bdevperf/test_config.sh@40 -- # create_job job2 00:29:42.483 00:29:42.483 13:13:45 -- bdevperf/common.sh@8 -- # local job_section=job2 00:29:42.483 13:13:45 -- bdevperf/common.sh@9 -- # local rw= 00:29:42.483 13:13:45 -- bdevperf/common.sh@10 -- # local filename= 00:29:42.483 13:13:45 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:29:42.483 13:13:45 -- bdevperf/common.sh@18 -- # job='[job2]' 00:29:42.483 13:13:45 -- bdevperf/common.sh@19 -- # echo 00:29:42.483 13:13:45 -- bdevperf/common.sh@20 -- # cat 00:29:42.483 13:13:45 -- 
bdevperf/test_config.sh@41 -- # create_job job3 00:29:42.483 00:29:42.483 13:13:45 -- bdevperf/common.sh@8 -- # local job_section=job3 00:29:42.483 13:13:45 -- bdevperf/common.sh@9 -- # local rw= 00:29:42.483 13:13:45 -- bdevperf/common.sh@10 -- # local filename= 00:29:42.483 13:13:45 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:29:42.483 13:13:45 -- bdevperf/common.sh@18 -- # job='[job3]' 00:29:42.483 13:13:45 -- bdevperf/common.sh@19 -- # echo 00:29:42.483 13:13:45 -- bdevperf/common.sh@20 -- # cat 00:29:42.483 13:13:45 -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:29:46.676 13:13:49 -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-04-17 13:13:45.728858] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:29:46.676 [2024-04-17 13:13:45.729098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141878 ] 00:29:46.676 Using job config with 4 jobs 00:29:46.676 [2024-04-17 13:13:45.895712] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:46.676 [2024-04-17 13:13:46.119467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:46.676 [2024-04-17 13:13:46.120845] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:29:46.676 [2024-04-17 13:13:46.606175] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:29:46.676 cpumask for '\''job0'\'' is too big 00:29:46.676 cpumask for '\''job1'\'' is too big 00:29:46.676 cpumask for '\''job2'\'' is too big 00:29:46.676 cpumask for '\''job3'\'' is too big 00:29:46.676 Running I/O for 2 seconds... 
00:29:46.676 00:29:46.676 Latency(us) 00:29:46.676 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:46.676 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:46.676 Malloc0 : 2.03 13248.62 12.94 0.00 0.00 19306.42 4021.53 31457.28 00:29:46.676 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:46.676 Malloc1 : 2.03 13238.66 12.93 0.00 0.00 19302.21 4527.94 31218.97 00:29:46.676 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:46.676 Malloc0 : 2.03 13228.92 12.92 0.00 0.00 19248.88 3723.64 27286.81 00:29:46.676 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:46.676 Malloc1 : 2.03 13218.91 12.91 0.00 0.00 19245.85 4379.00 27167.65 00:29:46.676 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:46.676 Malloc0 : 2.03 13208.90 12.90 0.00 0.00 19192.34 3813.00 25976.09 00:29:46.676 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:46.676 Malloc1 : 2.04 13282.73 12.97 0.00 0.00 19068.30 4468.36 26095.24 00:29:46.676 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:46.676 Malloc0 : 2.04 13273.22 12.96 0.00 0.00 19018.47 3530.01 26095.24 00:29:46.676 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:46.676 Malloc1 : 2.05 13263.52 12.95 0.00 0.00 19017.82 3753.43 26214.40 00:29:46.676 =================================================================================================================== 00:29:46.676 Total : 105963.48 103.48 0.00 0.00 19174.54 3530.01 31457.28' 00:29:46.676 13:13:49 -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-04-17 13:13:45.728858] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:29:46.676 [2024-04-17 13:13:45.729098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141878 ] 00:29:46.676 Using job config with 4 jobs 00:29:46.676 [2024-04-17 13:13:45.895712] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:46.676 [2024-04-17 13:13:46.119467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:46.676 [2024-04-17 13:13:46.120845] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:29:46.676 [2024-04-17 13:13:46.606175] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:29:46.676 cpumask for '\''job0'\'' is too big 00:29:46.676 cpumask for '\''job1'\'' is too big 00:29:46.676 cpumask for '\''job2'\'' is too big 00:29:46.676 cpumask for '\''job3'\'' is too big 00:29:46.676 Running I/O for 2 seconds... 
00:29:46.676 00:29:46.676 Latency(us) 00:29:46.676 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:46.676 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:46.676 Malloc0 : 2.03 13248.62 12.94 0.00 0.00 19306.42 4021.53 31457.28 00:29:46.676 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:46.676 Malloc1 : 2.03 13238.66 12.93 0.00 0.00 19302.21 4527.94 31218.97 00:29:46.676 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:46.676 Malloc0 : 2.03 13228.92 12.92 0.00 0.00 19248.88 3723.64 27286.81 00:29:46.676 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:46.676 Malloc1 : 2.03 13218.91 12.91 0.00 0.00 19245.85 4379.00 27167.65 00:29:46.676 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:46.676 Malloc0 : 2.03 13208.90 12.90 0.00 0.00 19192.34 3813.00 25976.09 00:29:46.676 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:46.676 Malloc1 : 2.04 13282.73 12.97 0.00 0.00 19068.30 4468.36 26095.24 00:29:46.676 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:46.677 Malloc0 : 2.04 13273.22 12.96 0.00 0.00 19018.47 3530.01 26095.24 00:29:46.677 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:46.677 Malloc1 : 2.05 13263.52 12.95 0.00 0.00 19017.82 3753.43 26214.40 00:29:46.677 =================================================================================================================== 00:29:46.677 Total : 105963.48 103.48 0.00 0.00 19174.54 3530.01 31457.28' 00:29:46.677 13:13:49 -- bdevperf/common.sh@32 -- # echo '[2024-04-17 13:13:45.728858] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:29:46.677 [2024-04-17 13:13:45.729098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141878 ] 00:29:46.677 Using job config with 4 jobs 00:29:46.677 [2024-04-17 13:13:45.895712] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:46.677 [2024-04-17 13:13:46.119467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:46.677 [2024-04-17 13:13:46.120845] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:29:46.677 [2024-04-17 13:13:46.606175] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:29:46.677 cpumask for '\''job0'\'' is too big 00:29:46.677 cpumask for '\''job1'\'' is too big 00:29:46.677 cpumask for '\''job2'\'' is too big 00:29:46.677 cpumask for '\''job3'\'' is too big 00:29:46.677 Running I/O for 2 seconds... 
00:29:46.677 00:29:46.677 Latency(us) 00:29:46.677 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:46.677 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:46.677 Malloc0 : 2.03 13248.62 12.94 0.00 0.00 19306.42 4021.53 31457.28 00:29:46.677 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:46.677 Malloc1 : 2.03 13238.66 12.93 0.00 0.00 19302.21 4527.94 31218.97 00:29:46.677 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:46.677 Malloc0 : 2.03 13228.92 12.92 0.00 0.00 19248.88 3723.64 27286.81 00:29:46.677 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:46.677 Malloc1 : 2.03 13218.91 12.91 0.00 0.00 19245.85 4379.00 27167.65 00:29:46.677 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:46.677 Malloc0 : 2.03 13208.90 12.90 0.00 0.00 19192.34 3813.00 25976.09 00:29:46.677 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:46.677 Malloc1 : 2.04 13282.73 12.97 0.00 0.00 19068.30 4468.36 26095.24 00:29:46.677 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:46.677 Malloc0 : 2.04 13273.22 12.96 0.00 0.00 19018.47 3530.01 26095.24 00:29:46.677 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:29:46.677 Malloc1 : 2.05 13263.52 12.95 0.00 0.00 19017.82 3753.43 26214.40 00:29:46.677 =================================================================================================================== 00:29:46.677 Total : 105963.48 103.48 0.00 0.00 19174.54 3530.01 31457.28' 00:29:46.677 13:13:49 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:29:46.677 13:13:49 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:29:46.677 13:13:49 -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:29:46.677 13:13:49 -- bdevperf/test_config.sh@44 -- # cleanup 00:29:46.677 13:13:49 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:29:46.677 13:13:49 -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:29:46.677 00:29:46.677 real 0m17.583s 00:29:46.677 user 0m15.885s 00:29:46.677 sys 0m1.141s 00:29:46.677 13:13:49 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:29:46.677 13:13:49 -- common/autotest_common.sh@10 -- # set +x 00:29:46.677 ************************************ 00:29:46.677 END TEST bdevperf_config 00:29:46.677 ************************************ 00:29:46.677 13:13:50 -- spdk/autotest.sh@187 -- # uname -s 00:29:46.677 13:13:50 -- spdk/autotest.sh@187 -- # [[ Linux == Linux ]] 00:29:46.677 13:13:50 -- spdk/autotest.sh@188 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:29:46.677 13:13:50 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:29:46.677 13:13:50 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:29:46.677 13:13:50 -- common/autotest_common.sh@10 -- # set +x 00:29:46.677 ************************************ 00:29:46.677 START TEST reactor_set_interrupt 00:29:46.677 ************************************ 00:29:46.677 13:13:50 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:29:46.677 * Looking for test storage... 
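Both suites in this log are launched through the same run_test wrapper (run_test bdevperf_config above, run_test reactor_set_interrupt here): it guards the argument count (the '[' 2 -le 1 ']' checks in the trace), prints the START TEST / END TEST banners, and times the script, producing the real/user/sys figures recorded after each suite. A simplified sketch of that pattern, not the verbatim autotest_common.sh code:

    run_test() {
        # at least a test name and a command are required
        if [ $# -le 1 ]; then
            echo "usage: run_test <test_name> <cmd> [args...]" >&2
            return 1
        fi
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        # `time` produces the real/user/sys lines seen throughout this log
        time "$@"
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }

The real helper additionally toggles tracing around the decoration (the xtrace_disable / set +x pairs in the trace) so the banners themselves do not pollute the shell trace.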
00:29:46.677 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:29:46.677 13:13:50 -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:29:46.677 13:13:50 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:29:46.677 13:13:50 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:29:46.677 13:13:50 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:29:46.677 13:13:50 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:29:46.677 13:13:50 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:46.677 13:13:50 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:29:46.677 13:13:50 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:29:46.677 13:13:50 -- common/autotest_common.sh@34 -- # set -e 00:29:46.677 13:13:50 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:29:46.677 13:13:50 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:29:46.677 13:13:50 -- common/autotest_common.sh@38 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:29:46.677 13:13:50 -- common/autotest_common.sh@43 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:29:46.677 13:13:50 -- common/autotest_common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:29:46.677 13:13:50 -- common/build_config.sh@1 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:29:46.677 13:13:50 -- common/build_config.sh@2 -- # CONFIG_FIO_PLUGIN=y 00:29:46.677 13:13:50 -- common/build_config.sh@3 -- # CONFIG_NVME_CUSE=y 00:29:46.677 13:13:50 -- common/build_config.sh@4 -- # CONFIG_RAID5F=y 00:29:46.677 13:13:50 -- common/build_config.sh@5 -- # CONFIG_LTO=n 00:29:46.677 13:13:50 -- common/build_config.sh@6 -- # CONFIG_SMA=n 00:29:46.677 13:13:50 -- common/build_config.sh@7 -- # CONFIG_ISAL=y 00:29:46.677 13:13:50 -- common/build_config.sh@8 -- # CONFIG_OPENSSL_PATH= 00:29:46.677 13:13:50 -- common/build_config.sh@9 -- # CONFIG_IDXD_KERNEL=n 00:29:46.677 13:13:50 -- common/build_config.sh@10 -- # CONFIG_URING_PATH= 00:29:46.677 13:13:50 -- common/build_config.sh@11 -- # CONFIG_DAOS=n 00:29:46.677 13:13:50 -- common/build_config.sh@12 -- # CONFIG_DPDK_LIB_DIR= 00:29:46.677 13:13:50 -- common/build_config.sh@13 -- # CONFIG_OCF=n 00:29:46.677 13:13:50 -- common/build_config.sh@14 -- # CONFIG_EXAMPLES=y 00:29:46.677 13:13:50 -- common/build_config.sh@15 -- # CONFIG_RDMA_PROV=verbs 00:29:46.677 13:13:50 -- common/build_config.sh@16 -- # CONFIG_ISCSI_INITIATOR=y 00:29:46.677 13:13:50 -- common/build_config.sh@17 -- # CONFIG_VTUNE=n 00:29:46.677 13:13:50 -- common/build_config.sh@18 -- # CONFIG_DPDK_INC_DIR= 00:29:46.677 13:13:50 -- common/build_config.sh@19 -- # CONFIG_CET=n 00:29:46.677 13:13:50 -- common/build_config.sh@20 -- # CONFIG_TESTS=y 00:29:46.677 13:13:50 -- common/build_config.sh@21 -- # CONFIG_APPS=y 00:29:46.677 13:13:50 -- common/build_config.sh@22 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:29:46.677 13:13:50 -- common/build_config.sh@23 -- # CONFIG_DAOS_DIR= 00:29:46.677 13:13:50 -- common/build_config.sh@24 -- # CONFIG_CRYPTO_MLX5=n 00:29:46.677 13:13:50 -- common/build_config.sh@25 -- # CONFIG_XNVME=n 00:29:46.677 13:13:50 -- common/build_config.sh@26 -- # CONFIG_UNIT_TESTS=y 00:29:46.677 13:13:50 -- 
common/build_config.sh@27 -- # CONFIG_FUSE=n 00:29:46.677 13:13:50 -- common/build_config.sh@28 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:29:46.677 13:13:50 -- common/build_config.sh@29 -- # CONFIG_OCF_PATH= 00:29:46.677 13:13:50 -- common/build_config.sh@30 -- # CONFIG_WPDK_DIR= 00:29:46.677 13:13:50 -- common/build_config.sh@31 -- # CONFIG_VFIO_USER=n 00:29:46.677 13:13:50 -- common/build_config.sh@32 -- # CONFIG_MAX_LCORES= 00:29:46.677 13:13:50 -- common/build_config.sh@33 -- # CONFIG_ARCH=native 00:29:46.677 13:13:50 -- common/build_config.sh@34 -- # CONFIG_TSAN=n 00:29:46.677 13:13:50 -- common/build_config.sh@35 -- # CONFIG_VIRTIO=y 00:29:46.677 13:13:50 -- common/build_config.sh@36 -- # CONFIG_HAVE_EVP_MAC=n 00:29:46.677 13:13:50 -- common/build_config.sh@37 -- # CONFIG_IPSEC_MB=n 00:29:46.677 13:13:50 -- common/build_config.sh@38 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:29:46.677 13:13:50 -- common/build_config.sh@39 -- # CONFIG_ASAN=y 00:29:46.677 13:13:50 -- common/build_config.sh@40 -- # CONFIG_SHARED=n 00:29:46.677 13:13:50 -- common/build_config.sh@41 -- # CONFIG_VTUNE_DIR= 00:29:46.677 13:13:50 -- common/build_config.sh@42 -- # CONFIG_RDMA_SET_TOS=y 00:29:46.677 13:13:50 -- common/build_config.sh@43 -- # CONFIG_VBDEV_COMPRESS=n 00:29:46.677 13:13:50 -- common/build_config.sh@44 -- # CONFIG_VFIO_USER_DIR= 00:29:46.677 13:13:50 -- common/build_config.sh@45 -- # CONFIG_PGO_DIR= 00:29:46.677 13:13:50 -- common/build_config.sh@46 -- # CONFIG_FUZZER_LIB= 00:29:46.677 13:13:50 -- common/build_config.sh@47 -- # CONFIG_HAVE_EXECINFO_H=y 00:29:46.677 13:13:50 -- common/build_config.sh@48 -- # CONFIG_USDT=n 00:29:46.677 13:13:50 -- common/build_config.sh@49 -- # CONFIG_HAVE_KEYUTILS=y 00:29:46.677 13:13:50 -- common/build_config.sh@50 -- # CONFIG_URING_ZNS=n 00:29:46.677 13:13:50 -- common/build_config.sh@51 -- # CONFIG_FC_PATH= 00:29:46.677 13:13:50 -- common/build_config.sh@52 -- # CONFIG_COVERAGE=y 00:29:46.677 13:13:50 -- common/build_config.sh@53 -- # CONFIG_CUSTOMOCF=n 00:29:46.677 13:13:50 -- common/build_config.sh@54 -- # CONFIG_DPDK_PKG_CONFIG=n 00:29:46.677 13:13:50 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:29:46.677 13:13:50 -- common/build_config.sh@56 -- # CONFIG_DEBUG=y 00:29:46.678 13:13:50 -- common/build_config.sh@57 -- # CONFIG_RDMA=y 00:29:46.678 13:13:50 -- common/build_config.sh@58 -- # CONFIG_HAVE_ARC4RANDOM=n 00:29:46.678 13:13:50 -- common/build_config.sh@59 -- # CONFIG_FUZZER=n 00:29:46.678 13:13:50 -- common/build_config.sh@60 -- # CONFIG_FC=n 00:29:46.678 13:13:50 -- common/build_config.sh@61 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:29:46.678 13:13:50 -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBARCHIVE=n 00:29:46.678 13:13:50 -- common/build_config.sh@63 -- # CONFIG_DPDK_COMPRESSDEV=n 00:29:46.678 13:13:50 -- common/build_config.sh@64 -- # CONFIG_CROSS_PREFIX= 00:29:46.678 13:13:50 -- common/build_config.sh@65 -- # CONFIG_PREFIX=/usr/local 00:29:46.678 13:13:50 -- common/build_config.sh@66 -- # CONFIG_HAVE_LIBBSD=n 00:29:46.678 13:13:50 -- common/build_config.sh@67 -- # CONFIG_UBSAN=y 00:29:46.678 13:13:50 -- common/build_config.sh@68 -- # CONFIG_PGO_CAPTURE=n 00:29:46.678 13:13:50 -- common/build_config.sh@69 -- # CONFIG_UBLK=n 00:29:46.678 13:13:50 -- common/build_config.sh@70 -- # CONFIG_ISAL_CRYPTO=y 00:29:46.678 13:13:50 -- common/build_config.sh@71 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:29:46.678 13:13:50 -- common/build_config.sh@72 -- # CONFIG_CRYPTO=n 00:29:46.678 13:13:50 -- common/build_config.sh@73 -- # 
CONFIG_RBD=n 00:29:46.678 13:13:50 -- common/build_config.sh@74 -- # CONFIG_LIBDIR= 00:29:46.678 13:13:50 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB_DIR= 00:29:46.678 13:13:50 -- common/build_config.sh@76 -- # CONFIG_PGO_USE=n 00:29:46.678 13:13:50 -- common/build_config.sh@77 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:29:46.678 13:13:50 -- common/build_config.sh@78 -- # CONFIG_GOLANG=n 00:29:46.678 13:13:50 -- common/build_config.sh@79 -- # CONFIG_VHOST=y 00:29:46.678 13:13:50 -- common/build_config.sh@80 -- # CONFIG_IDXD=y 00:29:46.678 13:13:50 -- common/build_config.sh@81 -- # CONFIG_AVAHI=n 00:29:46.678 13:13:50 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:29:46.678 13:13:50 -- common/autotest_common.sh@53 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:29:46.678 13:13:50 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:29:46.678 13:13:50 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:29:46.678 13:13:50 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:29:46.678 13:13:50 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:29:46.678 13:13:50 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:29:46.678 13:13:50 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:29:46.678 13:13:50 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:29:46.678 13:13:50 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:29:46.678 13:13:50 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:29:46.678 13:13:50 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:29:46.678 13:13:50 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:29:46.678 13:13:50 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:29:46.678 13:13:50 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:29:46.678 13:13:50 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:29:46.678 13:13:50 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:29:46.678 #define SPDK_CONFIG_H 00:29:46.678 #define SPDK_CONFIG_APPS 1 00:29:46.678 #define SPDK_CONFIG_ARCH native 00:29:46.678 #define SPDK_CONFIG_ASAN 1 00:29:46.678 #undef SPDK_CONFIG_AVAHI 00:29:46.678 #undef SPDK_CONFIG_CET 00:29:46.678 #define SPDK_CONFIG_COVERAGE 1 00:29:46.678 #define SPDK_CONFIG_CROSS_PREFIX 00:29:46.678 #undef SPDK_CONFIG_CRYPTO 00:29:46.678 #undef SPDK_CONFIG_CRYPTO_MLX5 00:29:46.678 #undef SPDK_CONFIG_CUSTOMOCF 00:29:46.678 #undef SPDK_CONFIG_DAOS 00:29:46.678 #define SPDK_CONFIG_DAOS_DIR 00:29:46.678 #define SPDK_CONFIG_DEBUG 1 00:29:46.678 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:29:46.678 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:29:46.678 #define SPDK_CONFIG_DPDK_INC_DIR 00:29:46.678 #define SPDK_CONFIG_DPDK_LIB_DIR 00:29:46.678 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:29:46.678 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:29:46.678 #define SPDK_CONFIG_EXAMPLES 1 00:29:46.678 #undef SPDK_CONFIG_FC 00:29:46.678 #define SPDK_CONFIG_FC_PATH 00:29:46.678 #define SPDK_CONFIG_FIO_PLUGIN 1 00:29:46.678 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:29:46.678 #undef SPDK_CONFIG_FUSE 00:29:46.678 #undef SPDK_CONFIG_FUZZER 00:29:46.678 
#define SPDK_CONFIG_FUZZER_LIB 00:29:46.678 #undef SPDK_CONFIG_GOLANG 00:29:46.678 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:29:46.678 #undef SPDK_CONFIG_HAVE_EVP_MAC 00:29:46.678 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:29:46.678 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:29:46.678 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:29:46.678 #undef SPDK_CONFIG_HAVE_LIBBSD 00:29:46.678 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:29:46.678 #define SPDK_CONFIG_IDXD 1 00:29:46.678 #undef SPDK_CONFIG_IDXD_KERNEL 00:29:46.678 #undef SPDK_CONFIG_IPSEC_MB 00:29:46.678 #define SPDK_CONFIG_IPSEC_MB_DIR 00:29:46.678 #define SPDK_CONFIG_ISAL 1 00:29:46.678 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:29:46.678 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:29:46.678 #define SPDK_CONFIG_LIBDIR 00:29:46.678 #undef SPDK_CONFIG_LTO 00:29:46.678 #define SPDK_CONFIG_MAX_LCORES 00:29:46.678 #define SPDK_CONFIG_NVME_CUSE 1 00:29:46.678 #undef SPDK_CONFIG_OCF 00:29:46.678 #define SPDK_CONFIG_OCF_PATH 00:29:46.678 #define SPDK_CONFIG_OPENSSL_PATH 00:29:46.678 #undef SPDK_CONFIG_PGO_CAPTURE 00:29:46.678 #define SPDK_CONFIG_PGO_DIR 00:29:46.678 #undef SPDK_CONFIG_PGO_USE 00:29:46.678 #define SPDK_CONFIG_PREFIX /usr/local 00:29:46.678 #define SPDK_CONFIG_RAID5F 1 00:29:46.678 #undef SPDK_CONFIG_RBD 00:29:46.678 #define SPDK_CONFIG_RDMA 1 00:29:46.678 #define SPDK_CONFIG_RDMA_PROV verbs 00:29:46.678 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:29:46.678 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:29:46.678 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:29:46.678 #undef SPDK_CONFIG_SHARED 00:29:46.678 #undef SPDK_CONFIG_SMA 00:29:46.678 #define SPDK_CONFIG_TESTS 1 00:29:46.678 #undef SPDK_CONFIG_TSAN 00:29:46.678 #undef SPDK_CONFIG_UBLK 00:29:46.678 #define SPDK_CONFIG_UBSAN 1 00:29:46.678 #define SPDK_CONFIG_UNIT_TESTS 1 00:29:46.678 #undef SPDK_CONFIG_URING 00:29:46.678 #define SPDK_CONFIG_URING_PATH 00:29:46.678 #undef SPDK_CONFIG_URING_ZNS 00:29:46.678 #undef SPDK_CONFIG_USDT 00:29:46.678 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:29:46.678 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:29:46.678 #undef SPDK_CONFIG_VFIO_USER 00:29:46.678 #define SPDK_CONFIG_VFIO_USER_DIR 00:29:46.678 #define SPDK_CONFIG_VHOST 1 00:29:46.678 #define SPDK_CONFIG_VIRTIO 1 00:29:46.678 #undef SPDK_CONFIG_VTUNE 00:29:46.678 #define SPDK_CONFIG_VTUNE_DIR 00:29:46.678 #define SPDK_CONFIG_WERROR 1 00:29:46.678 #define SPDK_CONFIG_WPDK_DIR 00:29:46.678 #undef SPDK_CONFIG_XNVME 00:29:46.678 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:29:46.678 13:13:50 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:29:46.678 13:13:50 -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:46.678 13:13:50 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:46.678 13:13:50 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:46.678 13:13:50 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:46.678 13:13:50 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:46.678 13:13:50 -- paths/export.sh@3 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:46.678 13:13:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:46.678 13:13:50 -- paths/export.sh@5 -- # export PATH 00:29:46.678 13:13:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:46.678 13:13:50 -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:29:46.678 13:13:50 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:29:46.678 13:13:50 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:29:46.678 13:13:50 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:29:46.678 13:13:50 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:29:46.678 13:13:50 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:29:46.678 13:13:50 -- pm/common@67 -- # TEST_TAG=N/A 00:29:46.678 13:13:50 -- pm/common@68 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:29:46.678 13:13:50 -- pm/common@70 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:29:46.678 13:13:50 -- pm/common@71 -- # uname -s 00:29:46.678 13:13:50 -- pm/common@71 -- # PM_OS=Linux 00:29:46.678 13:13:50 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:29:46.678 13:13:50 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:29:46.678 13:13:50 -- pm/common@76 -- # [[ Linux == Linux ]] 00:29:46.678 13:13:50 -- pm/common@76 -- # [[ QEMU != QEMU ]] 00:29:46.678 13:13:50 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:29:46.678 13:13:50 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:29:46.678 13:13:50 -- pm/common@85 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:29:46.678 13:13:50 -- common/autotest_common.sh@57 -- # : 0 00:29:46.679 13:13:50 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:29:46.679 13:13:50 -- common/autotest_common.sh@61 -- # : 0 00:29:46.679 13:13:50 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:29:46.679 13:13:50 -- common/autotest_common.sh@63 -- # : 0 00:29:46.679 13:13:50 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:29:46.679 13:13:50 -- common/autotest_common.sh@65 -- # : 1 00:29:46.679 13:13:50 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:29:46.679 13:13:50 -- common/autotest_common.sh@67 -- # : 1 00:29:46.679 13:13:50 -- common/autotest_common.sh@68 -- # export 
SPDK_TEST_UNITTEST 00:29:46.679 13:13:50 -- common/autotest_common.sh@69 -- # : 00:29:46.679 13:13:50 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:29:46.679 13:13:50 -- common/autotest_common.sh@71 -- # : 0 00:29:46.679 13:13:50 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:29:46.679 13:13:50 -- common/autotest_common.sh@73 -- # : 0 00:29:46.679 13:13:50 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:29:46.679 13:13:50 -- common/autotest_common.sh@75 -- # : 0 00:29:46.679 13:13:50 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:29:46.679 13:13:50 -- common/autotest_common.sh@77 -- # : 0 00:29:46.679 13:13:50 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:29:46.679 13:13:50 -- common/autotest_common.sh@79 -- # : 1 00:29:46.679 13:13:50 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:29:46.679 13:13:50 -- common/autotest_common.sh@81 -- # : 0 00:29:46.679 13:13:50 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:29:46.679 13:13:50 -- common/autotest_common.sh@83 -- # : 0 00:29:46.679 13:13:50 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:29:46.679 13:13:50 -- common/autotest_common.sh@85 -- # : 0 00:29:46.679 13:13:50 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:29:46.679 13:13:50 -- common/autotest_common.sh@87 -- # : 0 00:29:46.679 13:13:50 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:29:46.679 13:13:50 -- common/autotest_common.sh@89 -- # : 0 00:29:46.679 13:13:50 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:29:46.679 13:13:50 -- common/autotest_common.sh@91 -- # : 0 00:29:46.679 13:13:50 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:29:46.679 13:13:50 -- common/autotest_common.sh@93 -- # : 0 00:29:46.679 13:13:50 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:29:46.679 13:13:50 -- common/autotest_common.sh@95 -- # : 0 00:29:46.679 13:13:50 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:29:46.679 13:13:50 -- common/autotest_common.sh@97 -- # : 0 00:29:46.679 13:13:50 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:29:46.679 13:13:50 -- common/autotest_common.sh@99 -- # : 0 00:29:46.679 13:13:50 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:29:46.679 13:13:50 -- common/autotest_common.sh@101 -- # : rdma 00:29:46.679 13:13:50 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:29:46.679 13:13:50 -- common/autotest_common.sh@103 -- # : 0 00:29:46.679 13:13:50 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:29:46.679 13:13:50 -- common/autotest_common.sh@105 -- # : 0 00:29:46.679 13:13:50 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:29:46.679 13:13:50 -- common/autotest_common.sh@107 -- # : 1 00:29:46.679 13:13:50 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:29:46.679 13:13:50 -- common/autotest_common.sh@109 -- # : 0 00:29:46.679 13:13:50 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:29:46.679 13:13:50 -- common/autotest_common.sh@111 -- # : 0 00:29:46.679 13:13:50 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:29:46.679 13:13:50 -- common/autotest_common.sh@113 -- # : 0 00:29:46.679 13:13:50 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:29:46.679 13:13:50 -- common/autotest_common.sh@115 -- # : 0 00:29:46.679 13:13:50 -- 
common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:29:46.679 13:13:50 -- common/autotest_common.sh@117 -- # : 0 00:29:46.679 13:13:50 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:29:46.679 13:13:50 -- common/autotest_common.sh@119 -- # : 1 00:29:46.679 13:13:50 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:29:46.679 13:13:50 -- common/autotest_common.sh@121 -- # : 1 00:29:46.679 13:13:50 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:29:46.679 13:13:50 -- common/autotest_common.sh@123 -- # : 00:29:46.679 13:13:50 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:29:46.679 13:13:50 -- common/autotest_common.sh@125 -- # : 0 00:29:46.679 13:13:50 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:29:46.679 13:13:50 -- common/autotest_common.sh@127 -- # : 0 00:29:46.679 13:13:50 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:29:46.679 13:13:50 -- common/autotest_common.sh@129 -- # : 0 00:29:46.679 13:13:50 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:29:46.679 13:13:50 -- common/autotest_common.sh@131 -- # : 0 00:29:46.679 13:13:50 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:29:46.679 13:13:50 -- common/autotest_common.sh@133 -- # : 0 00:29:46.679 13:13:50 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:29:46.679 13:13:50 -- common/autotest_common.sh@135 -- # : 0 00:29:46.679 13:13:50 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:29:46.679 13:13:50 -- common/autotest_common.sh@137 -- # : 00:29:46.679 13:13:50 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:29:46.679 13:13:50 -- common/autotest_common.sh@139 -- # : true 00:29:46.679 13:13:50 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:29:46.679 13:13:50 -- common/autotest_common.sh@141 -- # : 1 00:29:46.679 13:13:50 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:29:46.679 13:13:50 -- common/autotest_common.sh@143 -- # : 0 00:29:46.679 13:13:50 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:29:46.679 13:13:50 -- common/autotest_common.sh@145 -- # : 0 00:29:46.679 13:13:50 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:29:46.679 13:13:50 -- common/autotest_common.sh@147 -- # : 0 00:29:46.679 13:13:50 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:29:46.679 13:13:50 -- common/autotest_common.sh@149 -- # : 0 00:29:46.679 13:13:50 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:29:46.679 13:13:50 -- common/autotest_common.sh@151 -- # : 0 00:29:46.679 13:13:50 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:29:46.679 13:13:50 -- common/autotest_common.sh@153 -- # : 00:29:46.679 13:13:50 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:29:46.679 13:13:50 -- common/autotest_common.sh@155 -- # : 0 00:29:46.679 13:13:50 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:29:46.679 13:13:50 -- common/autotest_common.sh@157 -- # : 0 00:29:46.679 13:13:50 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:29:46.679 13:13:50 -- common/autotest_common.sh@159 -- # : 0 00:29:46.679 13:13:50 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:29:46.679 13:13:50 -- common/autotest_common.sh@161 -- # : 0 00:29:46.679 13:13:50 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:29:46.679 13:13:50 -- common/autotest_common.sh@163 -- # : 0 00:29:46.679 13:13:50 
-- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:29:46.679 13:13:50 -- common/autotest_common.sh@166 -- # : 00:29:46.679 13:13:50 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:29:46.679 13:13:50 -- common/autotest_common.sh@168 -- # : 0 00:29:46.679 13:13:50 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:29:46.679 13:13:50 -- common/autotest_common.sh@170 -- # : 0 00:29:46.679 13:13:50 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:29:46.679 13:13:50 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:29:46.679 13:13:50 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:29:46.679 13:13:50 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:29:46.679 13:13:50 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:29:46.679 13:13:50 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:29:46.679 13:13:50 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:29:46.679 13:13:50 -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:29:46.679 13:13:50 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:29:46.679 13:13:50 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:29:46.679 13:13:50 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:29:46.679 13:13:50 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:29:46.679 13:13:50 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:29:46.679 13:13:50 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:29:46.679 13:13:50 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:29:46.679 13:13:50 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:29:46.679 13:13:50 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:29:46.679 13:13:50 
-- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:29:46.679 13:13:50 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:29:46.679 13:13:50 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:29:46.679 13:13:50 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:29:46.679 13:13:50 -- common/autotest_common.sh@199 -- # cat 00:29:46.679 13:13:50 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:29:46.679 13:13:50 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:29:46.680 13:13:50 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:29:46.680 13:13:50 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:29:46.680 13:13:50 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:29:46.680 13:13:50 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:29:46.680 13:13:50 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:29:46.680 13:13:50 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:29:46.680 13:13:50 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:29:46.680 13:13:50 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:29:46.680 13:13:50 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:29:46.680 13:13:50 -- common/autotest_common.sh@242 -- # export QEMU_BIN= 00:29:46.680 13:13:50 -- common/autotest_common.sh@242 -- # QEMU_BIN= 00:29:46.680 13:13:50 -- common/autotest_common.sh@243 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:29:46.680 13:13:50 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:29:46.680 13:13:50 -- common/autotest_common.sh@245 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:29:46.680 13:13:50 -- common/autotest_common.sh@245 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:29:46.680 13:13:50 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:29:46.680 13:13:50 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:29:46.680 13:13:50 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:29:46.680 13:13:50 -- common/autotest_common.sh@252 -- # export valgrind= 00:29:46.680 13:13:50 -- common/autotest_common.sh@252 -- # valgrind= 00:29:46.680 13:13:50 -- common/autotest_common.sh@258 -- # uname -s 00:29:46.680 13:13:50 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:29:46.680 13:13:50 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:29:46.680 13:13:50 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:29:46.680 13:13:50 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:29:46.680 13:13:50 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:29:46.680 13:13:50 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:29:46.680 13:13:50 -- common/autotest_common.sh@268 -- # MAKE=make 00:29:46.680 13:13:50 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j10 00:29:46.680 13:13:50 -- common/autotest_common.sh@285 -- # 
export HUGEMEM=4096 00:29:46.680 13:13:50 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:29:46.680 13:13:50 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:29:46.680 13:13:50 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:29:46.680 13:13:50 -- common/autotest_common.sh@307 -- # [[ -z 141979 ]] 00:29:46.680 13:13:50 -- common/autotest_common.sh@307 -- # kill -0 141979 00:29:46.680 13:13:50 -- common/autotest_common.sh@1654 -- # set_test_storage 2147483648 00:29:46.680 13:13:50 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:29:46.680 13:13:50 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:29:46.680 13:13:50 -- common/autotest_common.sh@320 -- # local mount target_dir 00:29:46.680 13:13:50 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:29:46.680 13:13:50 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:29:46.680 13:13:50 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:29:46.680 13:13:50 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:29:46.680 13:13:50 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.5GbNOA 00:29:46.680 13:13:50 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:29:46.680 13:13:50 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:29:46.680 13:13:50 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:29:46.680 13:13:50 -- common/autotest_common.sh@344 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.5GbNOA/tests/interrupt /tmp/spdk.5GbNOA 00:29:46.680 13:13:50 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:29:46.680 13:13:50 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:29:46.680 13:13:50 -- common/autotest_common.sh@316 -- # df -T 00:29:46.680 13:13:50 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:29:46.680 13:13:50 -- common/autotest_common.sh@350 -- # mounts["$mount"]=udev 00:29:46.680 13:13:50 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:29:46.680 13:13:50 -- common/autotest_common.sh@351 -- # avails["$mount"]=6224465920 00:29:46.680 13:13:50 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6224465920 00:29:46.680 13:13:50 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:29:46.680 13:13:50 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:29:46.680 13:13:50 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:29:46.680 13:13:50 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:29:46.680 13:13:50 -- common/autotest_common.sh@351 -- # avails["$mount"]=1249763328 00:29:46.680 13:13:50 -- common/autotest_common.sh@351 -- # sizes["$mount"]=1254514688 00:29:46.680 13:13:50 -- common/autotest_common.sh@352 -- # uses["$mount"]=4751360 00:29:46.680 13:13:50 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:29:46.680 13:13:50 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda1 00:29:46.680 13:13:50 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext4 00:29:46.680 13:13:50 -- common/autotest_common.sh@351 -- # avails["$mount"]=10598133760 00:29:46.680 13:13:50 -- common/autotest_common.sh@351 -- # sizes["$mount"]=20616794112 00:29:46.680 13:13:50 -- common/autotest_common.sh@352 -- # uses["$mount"]=10001883136 00:29:46.680 13:13:50 -- common/autotest_common.sh@349 -- # read -r source fs size use 
avail _ mount 00:29:46.680 13:13:50 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:29:46.680 13:13:50 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:29:46.680 13:13:50 -- common/autotest_common.sh@351 -- # avails["$mount"]=6269952000 00:29:46.680 13:13:50 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6272565248 00:29:46.680 13:13:50 -- common/autotest_common.sh@352 -- # uses["$mount"]=2613248 00:29:46.680 13:13:50 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:29:46.680 13:13:50 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:29:46.680 13:13:50 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:29:46.680 13:13:50 -- common/autotest_common.sh@351 -- # avails["$mount"]=5242880 00:29:46.680 13:13:50 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5242880 00:29:46.680 13:13:50 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:29:46.680 13:13:50 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:29:46.680 13:13:50 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:29:46.680 13:13:50 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:29:46.680 13:13:50 -- common/autotest_common.sh@351 -- # avails["$mount"]=6272565248 00:29:46.680 13:13:50 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6272565248 00:29:46.680 13:13:50 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:29:46.680 13:13:50 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:29:46.680 13:13:50 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda15 00:29:46.680 13:13:50 -- common/autotest_common.sh@350 -- # fss["$mount"]=vfat 00:29:46.680 13:13:50 -- common/autotest_common.sh@351 -- # avails["$mount"]=103089152 00:29:46.680 13:13:50 -- common/autotest_common.sh@351 -- # sizes["$mount"]=109422592 00:29:46.680 13:13:50 -- common/autotest_common.sh@352 -- # uses["$mount"]=6334464 00:29:46.680 13:13:50 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:29:46.680 13:13:50 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/loop2 00:29:46.680 13:13:50 -- common/autotest_common.sh@350 -- # fss["$mount"]=squashfs 00:29:46.680 13:13:50 -- common/autotest_common.sh@351 -- # avails["$mount"]=0 00:29:46.680 13:13:50 -- common/autotest_common.sh@351 -- # sizes["$mount"]=41025536 00:29:46.680 13:13:50 -- common/autotest_common.sh@352 -- # uses["$mount"]=41025536 00:29:46.680 13:13:50 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:29:46.680 13:13:50 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/loop1 00:29:46.680 13:13:50 -- common/autotest_common.sh@350 -- # fss["$mount"]=squashfs 00:29:46.680 13:13:50 -- common/autotest_common.sh@351 -- # avails["$mount"]=0 00:29:46.680 13:13:50 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:29:46.680 13:13:50 -- common/autotest_common.sh@352 -- # uses["$mount"]=67108864 00:29:46.680 13:13:50 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:29:46.680 13:13:50 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/loop0 00:29:46.680 13:13:50 -- common/autotest_common.sh@350 -- # fss["$mount"]=squashfs 00:29:46.680 13:13:50 -- common/autotest_common.sh@351 -- # avails["$mount"]=0 00:29:46.680 13:13:50 -- common/autotest_common.sh@351 -- # sizes["$mount"]=96337920 00:29:46.680 13:13:50 -- common/autotest_common.sh@352 -- # uses["$mount"]=96337920 
00:29:46.680 13:13:50 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:29:46.680 13:13:50 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:29:46.680 13:13:50 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:29:46.680 13:13:50 -- common/autotest_common.sh@351 -- # avails["$mount"]=1254510592 00:29:46.680 13:13:50 -- common/autotest_common.sh@351 -- # sizes["$mount"]=1254510592 00:29:46.680 13:13:50 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:29:46.680 13:13:50 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:29:46.680 13:13:50 -- common/autotest_common.sh@350 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest_3/ubuntu2004-libvirt/output 00:29:46.680 13:13:50 -- common/autotest_common.sh@350 -- # fss["$mount"]=fuse.sshfs 00:29:46.680 13:13:50 -- common/autotest_common.sh@351 -- # avails["$mount"]=93199785984 00:29:46.680 13:13:50 -- common/autotest_common.sh@351 -- # sizes["$mount"]=105088212992 00:29:46.680 13:13:50 -- common/autotest_common.sh@352 -- # uses["$mount"]=6502993920 00:29:46.680 13:13:50 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:29:46.680 13:13:50 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/loop3 00:29:46.680 13:13:50 -- common/autotest_common.sh@350 -- # fss["$mount"]=squashfs 00:29:46.680 13:13:50 -- common/autotest_common.sh@351 -- # avails["$mount"]=0 00:29:46.680 13:13:50 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:29:46.680 13:13:50 -- common/autotest_common.sh@352 -- # uses["$mount"]=67108864 00:29:46.680 13:13:50 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:29:46.680 13:13:50 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:29:46.680 * Looking for test storage... 
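[editor's note] The mount scan above is autotest_common.sh's set_test_storage step: one `df -T` pass is loaded into parallel associative arrays (mounts/fss/sizes/avails/uses, keyed by mount point), and the records that follow walk the storage candidates until one has room for the request. A minimal re-creation of that logic under stated assumptions -- the trace stores byte counts, so the 1K-block columns are scaled here, and `storage_candidates` is taken as input; this is a sketch, not the helper's exact code:

#!/usr/bin/env bash
# Parse `df -T` into the same parallel arrays the trace fills.
declare -A mounts fss sizes avails uses
while read -r source fs size use avail _ mount; do
    mounts["$mount"]=$source
    fss["$mount"]=$fs
    sizes["$mount"]=$(( size * 1024 ))    # assumption: scale df's 1K blocks to bytes
    uses["$mount"]=$(( use * 1024 ))
    avails["$mount"]=$(( avail * 1024 ))
done < <(df -T | grep -v Filesystem)

# Walk candidate directories until one fits the request
# (2 GiB plus overhead = 2214592512 bytes in this run).
requested_size=2214592512
for target_dir in "${storage_candidates[@]}"; do
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    target_space=${avails[$mount]}
    (( target_space == 0 || target_space < requested_size )) && continue
    # Reject a fit that would push the filesystem past 95% full,
    # mirroring the new_size check in the records below.
    new_size=$(( requested_size + ${uses[$mount]} ))
    (( new_size * 100 / ${sizes[$mount]} > 95 )) && continue
    printf '* Found test storage at %s\n' "$target_dir"
    break
done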
00:29:46.680 13:13:50 -- common/autotest_common.sh@357 -- # local target_space new_size 00:29:46.681 13:13:50 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:29:46.681 13:13:50 -- common/autotest_common.sh@361 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:29:46.681 13:13:50 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:29:46.681 13:13:50 -- common/autotest_common.sh@361 -- # mount=/ 00:29:46.681 13:13:50 -- common/autotest_common.sh@363 -- # target_space=10598133760 00:29:46.681 13:13:50 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:29:46.681 13:13:50 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:29:46.681 13:13:50 -- common/autotest_common.sh@369 -- # [[ ext4 == tmpfs ]] 00:29:46.681 13:13:50 -- common/autotest_common.sh@369 -- # [[ ext4 == ramfs ]] 00:29:46.681 13:13:50 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:29:46.681 13:13:50 -- common/autotest_common.sh@370 -- # new_size=12216475648 00:29:46.681 13:13:50 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:29:46.681 13:13:50 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:29:46.681 13:13:50 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:29:46.681 13:13:50 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:29:46.681 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:29:46.681 13:13:50 -- common/autotest_common.sh@378 -- # return 0 00:29:46.681 13:13:50 -- common/autotest_common.sh@1656 -- # set -o errtrace 00:29:46.681 13:13:50 -- common/autotest_common.sh@1657 -- # shopt -s extdebug 00:29:46.681 13:13:50 -- common/autotest_common.sh@1658 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:29:46.681 13:13:50 -- common/autotest_common.sh@1660 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:29:46.681 13:13:50 -- common/autotest_common.sh@1661 -- # true 00:29:46.681 13:13:50 -- common/autotest_common.sh@1663 -- # xtrace_fd 00:29:46.681 13:13:50 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:29:46.681 13:13:50 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:29:46.681 13:13:50 -- common/autotest_common.sh@27 -- # exec 00:29:46.681 13:13:50 -- common/autotest_common.sh@29 -- # exec 00:29:46.681 13:13:50 -- common/autotest_common.sh@31 -- # xtrace_restore 00:29:46.681 13:13:50 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:29:46.681 13:13:50 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:29:46.681 13:13:50 -- common/autotest_common.sh@18 -- # set -x 00:29:46.681 13:13:50 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:46.681 13:13:50 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:29:46.681 13:13:50 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:29:46.681 13:13:50 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:29:46.681 13:13:50 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:29:46.681 13:13:50 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:29:46.681 13:13:50 -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:29:46.681 13:13:50 -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:29:46.681 13:13:50 -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:29:46.681 13:13:50 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:46.681 13:13:50 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:29:46.681 13:13:50 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=142023 00:29:46.681 13:13:50 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:46.681 13:13:50 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 142023 /var/tmp/spdk.sock 00:29:46.681 13:13:50 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:29:46.681 13:13:50 -- common/autotest_common.sh@817 -- # '[' -z 142023 ']' 00:29:46.681 13:13:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:46.681 13:13:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:46.681 13:13:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:46.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:46.681 13:13:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:46.681 13:13:50 -- common/autotest_common.sh@10 -- # set +x 00:29:46.681 [2024-04-17 13:13:50.299768] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
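[editor's note] The records around this point show start_intr_tgt bringing up the interrupt_tgt example on a three-core mask (-m 0x07) with interrupt mode enabled (-E -g), then blocking in waitforlisten until the RPC socket answers. A hedged equivalent of that launch-and-wait pattern, with paths as they appear in the log; the polling loop is illustrative, not the helper's actual implementation:

#!/usr/bin/env bash
rpc_sock=/var/tmp/spdk.sock
# Launch the target with the flags visible in the trace.
/home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r "$rpc_sock" -E -g &
intr_tgt_pid=$!
trap 'kill "$intr_tgt_pid"; exit 1' SIGINT SIGTERM EXIT

echo "Waiting for process to start up and listen on UNIX domain socket $rpc_sock..."
# Poll until the UNIX-domain RPC socket services a trivial request.
for _ in $(seq 1 100); do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" rpc_get_methods &>/dev/null && break
    sleep 0.1
done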
00:29:46.681 [2024-04-17 13:13:50.299962] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142023 ] 00:29:46.681 [2024-04-17 13:13:50.470126] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:46.681 [2024-04-17 13:13:50.677782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:46.681 [2024-04-17 13:13:50.677966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:46.681 [2024-04-17 13:13:50.677963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:46.940 [2024-04-17 13:13:50.966690] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:47.198 13:13:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:47.198 13:13:51 -- common/autotest_common.sh@850 -- # return 0 00:29:47.198 13:13:51 -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:29:47.198 13:13:51 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:47.458 Malloc0 00:29:47.458 Malloc1 00:29:47.458 Malloc2 00:29:47.458 13:13:51 -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:29:47.458 13:13:51 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:29:47.458 13:13:51 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:29:47.458 13:13:51 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:29:47.458 5000+0 records in 00:29:47.458 5000+0 records out 00:29:47.458 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0171835 s, 596 MB/s 00:29:47.458 13:13:51 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:29:47.716 AIO0 00:29:47.975 13:13:51 -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 142023 00:29:47.975 13:13:51 -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 142023 without_thd 00:29:47.975 13:13:51 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=142023 00:29:47.975 13:13:51 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:29:47.975 13:13:51 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:29:47.975 13:13:51 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:29:47.975 13:13:51 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:29:47.975 13:13:51 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:29:47.975 13:13:51 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:29:47.975 13:13:51 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:29:47.975 13:13:51 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:29:47.975 13:13:51 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:29:48.233 13:13:52 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:29:48.233 13:13:52 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:29:48.233 13:13:52 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 
0x4 00:29:48.233 13:13:52 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:29:48.233 13:13:52 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:29:48.233 13:13:52 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:29:48.233 13:13:52 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:29:48.233 13:13:52 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:29:48.233 13:13:52 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:29:48.508 13:13:52 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:29:48.508 spdk_thread ids are 1 on reactor0. 00:29:48.508 13:13:52 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:29:48.508 13:13:52 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:29:48.508 13:13:52 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:29:48.508 13:13:52 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 142023 0 00:29:48.508 13:13:52 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 142023 0 idle 00:29:48.508 13:13:52 -- interrupt/interrupt_common.sh@33 -- # local pid=142023 00:29:48.508 13:13:52 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:29:48.508 13:13:52 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:29:48.508 13:13:52 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:29:48.508 13:13:52 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:29:48.508 13:13:52 -- interrupt/interrupt_common.sh@41 -- # hash top 00:29:48.508 13:13:52 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:29:48.508 13:13:52 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:29:48.508 13:13:52 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:29:48.508 13:13:52 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142023 -w 256 00:29:48.508 13:13:52 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142023 root 20 0 20.1t 146480 29260 S 0.0 1.2 0:00.74 reactor_0' 00:29:48.508 13:13:52 -- interrupt/interrupt_common.sh@48 -- # echo 142023 root 20 0 20.1t 146480 29260 S 0.0 1.2 0:00.74 reactor_0 00:29:48.508 13:13:52 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:29:48.508 13:13:52 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:29:48.508 13:13:52 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:29:48.508 13:13:52 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:29:48.508 13:13:52 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:29:48.508 13:13:52 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:29:48.508 13:13:52 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:29:48.508 13:13:52 -- interrupt/interrupt_common.sh@56 -- # return 0 00:29:48.508 13:13:52 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:29:48.508 13:13:52 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 142023 1 00:29:48.508 13:13:52 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 142023 1 idle 00:29:48.508 13:13:52 -- interrupt/interrupt_common.sh@33 -- # local pid=142023 00:29:48.508 13:13:52 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:29:48.508 13:13:52 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:29:48.508 13:13:52 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:29:48.508 
13:13:52 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:29:48.508 13:13:52 -- interrupt/interrupt_common.sh@41 -- # hash top 00:29:48.508 13:13:52 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:29:48.508 13:13:52 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:29:48.508 13:13:52 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142023 -w 256 00:29:48.508 13:13:52 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:29:48.796 13:13:52 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142026 root 20 0 20.1t 146480 29260 S 0.0 1.2 0:00.00 reactor_1' 00:29:48.796 13:13:52 -- interrupt/interrupt_common.sh@48 -- # echo 142026 root 20 0 20.1t 146480 29260 S 0.0 1.2 0:00.00 reactor_1 00:29:48.796 13:13:52 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:29:48.796 13:13:52 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:29:48.796 13:13:52 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:29:48.796 13:13:52 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:29:48.796 13:13:52 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:29:48.796 13:13:52 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:29:48.796 13:13:52 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:29:48.796 13:13:52 -- interrupt/interrupt_common.sh@56 -- # return 0 00:29:48.796 13:13:52 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:29:48.796 13:13:52 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 142023 2 00:29:48.796 13:13:52 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 142023 2 idle 00:29:48.796 13:13:52 -- interrupt/interrupt_common.sh@33 -- # local pid=142023 00:29:48.796 13:13:52 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:29:48.796 13:13:52 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:29:48.796 13:13:52 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:29:48.796 13:13:52 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:29:48.796 13:13:52 -- interrupt/interrupt_common.sh@41 -- # hash top 00:29:48.796 13:13:52 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:29:48.796 13:13:52 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:29:48.796 13:13:52 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142023 -w 256 00:29:48.796 13:13:52 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:29:48.796 13:13:52 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142027 root 20 0 20.1t 146480 29260 S 0.0 1.2 0:00.00 reactor_2' 00:29:48.796 13:13:52 -- interrupt/interrupt_common.sh@48 -- # echo 142027 root 20 0 20.1t 146480 29260 S 0.0 1.2 0:00.00 reactor_2 00:29:48.796 13:13:52 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:29:48.796 13:13:52 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:29:48.796 13:13:52 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:29:48.796 13:13:52 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:29:48.796 13:13:52 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:29:48.796 13:13:52 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:29:48.796 13:13:52 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:29:48.796 13:13:52 -- interrupt/interrupt_common.sh@56 -- # return 0 00:29:48.796 13:13:52 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:29:48.796 13:13:52 -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 
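[editor's note] Every reactor_is_idle check above (and the reactor_is_busy checks that follow) reduces to a single top snapshot: take the thread line named reactor_N for the target PID, pull the %CPU column, truncate it to an integer, and compare it to a threshold -- the trace treats more than 30% as a failed idle check and less than 70% as a failed busy check. A condensed sketch of that probe, assuming top's default field layout where %CPU is column 9:

#!/usr/bin/env bash
# Usage: reactor_cpu_rate <pid> <reactor-index>  -> integer %CPU of that thread
reactor_cpu_rate() {
    local pid=$1 idx=$2 line rate
    # -b batch, -H per-thread, -n 1 one snapshot, -w 256 wide output (as logged)
    line=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx")
    rate=$(echo "$line" | sed -e 's/^\s*//g' | awk '{print $9}')
    echo "${rate%.*}"    # "99.9" -> "99", "0.0" -> "0", as the trace does
}

rate=$(reactor_cpu_rate 142023 0)
(( rate > 30 )) && echo "reactor_0: busy" || echo "reactor_0: idle"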
00:29:48.796 13:13:52 -- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:29:49.054 [2024-04-17 13:13:53.115503] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:49.054 13:13:53 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:29:49.313 [2024-04-17 13:13:53.391253] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:29:49.313 [2024-04-17 13:13:53.391995] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:29:49.313 13:13:53 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:29:49.571 [2024-04-17 13:13:53.639006] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:29:49.571 [2024-04-17 13:13:53.639682] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:29:49.571 13:13:53 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:29:49.571 13:13:53 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 142023 0 00:29:49.571 13:13:53 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 142023 0 busy 00:29:49.571 13:13:53 -- interrupt/interrupt_common.sh@33 -- # local pid=142023 00:29:49.571 13:13:53 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:29:49.571 13:13:53 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:29:49.571 13:13:53 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:29:49.571 13:13:53 -- interrupt/interrupt_common.sh@41 -- # hash top 00:29:49.571 13:13:53 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:29:49.571 13:13:53 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:29:49.571 13:13:53 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:29:49.571 13:13:53 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142023 -w 256 00:29:49.830 13:13:53 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142023 root 20 0 20.1t 146600 29260 R 99.9 1.2 0:01.16 reactor_0' 00:29:49.830 13:13:53 -- interrupt/interrupt_common.sh@48 -- # echo 142023 root 20 0 20.1t 146600 29260 R 99.9 1.2 0:01.16 reactor_0 00:29:49.830 13:13:53 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:29:49.830 13:13:53 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:29:49.830 13:13:53 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:29:49.830 13:13:53 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:29:49.830 13:13:53 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:29:49.830 13:13:53 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:29:49.830 13:13:53 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:29:49.830 13:13:53 -- interrupt/interrupt_common.sh@56 -- # return 0 00:29:49.830 13:13:53 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:29:49.830 13:13:53 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 142023 2 00:29:49.830 13:13:53 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 142023 2 busy 00:29:49.830 13:13:53 -- interrupt/interrupt_common.sh@33 -- # local pid=142023 00:29:49.830 13:13:53 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:29:49.830 
13:13:53 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:29:49.830 13:13:53 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:29:49.830 13:13:53 -- interrupt/interrupt_common.sh@41 -- # hash top 00:29:49.830 13:13:53 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:29:49.830 13:13:53 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:29:49.830 13:13:53 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142023 -w 256 00:29:49.830 13:13:53 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:29:50.088 13:13:53 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142027 root 20 0 20.1t 146600 29260 R 99.9 1.2 0:00.33 reactor_2' 00:29:50.088 13:13:53 -- interrupt/interrupt_common.sh@48 -- # echo 142027 root 20 0 20.1t 146600 29260 R 99.9 1.2 0:00.33 reactor_2 00:29:50.088 13:13:53 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:29:50.088 13:13:53 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:29:50.088 13:13:53 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:29:50.088 13:13:53 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:29:50.088 13:13:53 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:29:50.088 13:13:53 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:29:50.088 13:13:53 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:29:50.088 13:13:53 -- interrupt/interrupt_common.sh@56 -- # return 0 00:29:50.088 13:13:53 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:29:50.088 [2024-04-17 13:13:54.207184] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:29:50.088 [2024-04-17 13:13:54.207917] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:29:50.088 13:13:54 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:29:50.088 13:13:54 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 142023 2 00:29:50.088 13:13:54 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 142023 2 idle 00:29:50.088 13:13:54 -- interrupt/interrupt_common.sh@33 -- # local pid=142023 00:29:50.088 13:13:54 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:29:50.088 13:13:54 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:29:50.088 13:13:54 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:29:50.089 13:13:54 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:29:50.089 13:13:54 -- interrupt/interrupt_common.sh@41 -- # hash top 00:29:50.089 13:13:54 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:29:50.089 13:13:54 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:29:50.089 13:13:54 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142023 -w 256 00:29:50.089 13:13:54 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:29:50.354 13:13:54 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142027 root 20 0 20.1t 146664 29260 S 0.0 1.2 0:00.56 reactor_2' 00:29:50.354 13:13:54 -- interrupt/interrupt_common.sh@48 -- # echo 142027 root 20 0 20.1t 146664 29260 S 0.0 1.2 0:00.56 reactor_2 00:29:50.354 13:13:54 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:29:50.354 13:13:54 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:29:50.354 13:13:54 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:29:50.354 13:13:54 -- 
interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:29:50.354 13:13:54 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:29:50.354 13:13:54 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:29:50.354 13:13:54 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:29:50.354 13:13:54 -- interrupt/interrupt_common.sh@56 -- # return 0 00:29:50.354 13:13:54 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:29:50.612 [2024-04-17 13:13:54.667031] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:29:50.612 [2024-04-17 13:13:54.667675] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:29:50.612 13:13:54 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:29:50.612 13:13:54 -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:29:50.612 13:13:54 -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:29:50.869 [2024-04-17 13:13:54.895588] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:50.869 13:13:54 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 142023 0 00:29:50.869 13:13:54 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 142023 0 idle 00:29:50.869 13:13:54 -- interrupt/interrupt_common.sh@33 -- # local pid=142023 00:29:50.869 13:13:54 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:29:50.869 13:13:54 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:29:50.869 13:13:54 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:29:50.869 13:13:54 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:29:50.869 13:13:54 -- interrupt/interrupt_common.sh@41 -- # hash top 00:29:50.869 13:13:54 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:29:50.869 13:13:54 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:29:50.869 13:13:54 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142023 -w 256 00:29:50.869 13:13:54 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:29:51.174 13:13:55 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142023 root 20 0 20.1t 146756 29260 S 0.0 1.2 0:02.02 reactor_0' 00:29:51.174 13:13:55 -- interrupt/interrupt_common.sh@48 -- # echo 142023 root 20 0 20.1t 146756 29260 S 0.0 1.2 0:02.02 reactor_0 00:29:51.174 13:13:55 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:29:51.174 13:13:55 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:29:51.174 13:13:55 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:29:51.174 13:13:55 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:29:51.174 13:13:55 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:29:51.174 13:13:55 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:29:51.174 13:13:55 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:29:51.174 13:13:55 -- interrupt/interrupt_common.sh@56 -- # return 0 00:29:51.174 13:13:55 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:29:51.174 13:13:55 -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:29:51.174 13:13:55 -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:29:51.174 13:13:55 -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 142023 
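[editor's note] The killprocess call above is expanded in the records that follow: confirm the PID is alive, sanity-check the process name so a stale PID cannot signal an unrelated (or sudo-wrapped) process, then kill and reap. A re-creation built only from the commands visible in the trace:

#!/usr/bin/env bash
killprocess_sketch() {
    local pid=$1 name
    kill -0 "$pid" || return 0                   # already gone
    name=$(ps --no-headers -o comm= "$pid")      # "reactor_0" in this run
    [[ $name == sudo ]] && return 1              # never signal a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                      # reap it if it is our child
}
killprocess_sketch 142023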
00:29:51.174 13:13:55 -- common/autotest_common.sh@924 -- # '[' -z 142023 ']' 00:29:51.174 13:13:55 -- common/autotest_common.sh@928 -- # kill -0 142023 00:29:51.174 13:13:55 -- common/autotest_common.sh@929 -- # uname 00:29:51.174 13:13:55 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:29:51.174 13:13:55 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 142023 00:29:51.174 13:13:55 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:29:51.174 killing process with pid 142023 00:29:51.174 13:13:55 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:29:51.174 13:13:55 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 142023' 00:29:51.174 13:13:55 -- common/autotest_common.sh@943 -- # kill 142023 00:29:51.174 13:13:55 -- common/autotest_common.sh@948 -- # wait 142023 00:29:52.552 13:13:56 -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:29:52.552 13:13:56 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:29:52.552 13:13:56 -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:29:52.552 13:13:56 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:52.552 13:13:56 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:29:52.552 13:13:56 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=142190 00:29:52.552 13:13:56 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:29:52.552 13:13:56 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:52.552 13:13:56 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 142190 /var/tmp/spdk.sock 00:29:52.552 13:13:56 -- common/autotest_common.sh@817 -- # '[' -z 142190 ']' 00:29:52.552 13:13:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:52.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:52.552 13:13:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:52.552 13:13:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:52.552 13:13:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:52.552 13:13:56 -- common/autotest_common.sh@10 -- # set +x 00:29:52.552 [2024-04-17 13:13:56.541667] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:29:52.552 [2024-04-17 13:13:56.541899] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142190 ] 00:29:52.810 [2024-04-17 13:13:56.734196] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:52.810 [2024-04-17 13:13:56.943870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:52.810 [2024-04-17 13:13:56.944035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.810 [2024-04-17 13:13:56.944038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:53.378 [2024-04-17 13:13:57.234820] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
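[editor's note] The second run below repeats the setup: setup_bdev_mem registers Malloc0-2 over the same RPC socket before the AIO bdev is created. The trace shows only the resulting names, so the sizes here are assumptions; bdev_malloc_create is the standard SPDK RPC for RAM-backed bdevs:

#!/usr/bin/env bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# 32 MiB with 512-byte blocks is an assumed geometry -- the log does not show it.
for i in 0 1 2; do
    "$rpc" bdev_malloc_create 32 512 -b "Malloc$i"
done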
00:29:53.378 13:13:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:53.378 13:13:57 -- common/autotest_common.sh@850 -- # return 0 00:29:53.378 13:13:57 -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:29:53.378 13:13:57 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:53.945 Malloc0 00:29:53.945 Malloc1 00:29:53.945 Malloc2 00:29:53.945 13:13:57 -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:29:53.945 13:13:57 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:29:53.945 13:13:57 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:29:53.945 13:13:57 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:29:53.945 5000+0 records in 00:29:53.945 5000+0 records out 00:29:53.945 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0277055 s, 370 MB/s 00:29:53.945 13:13:57 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:29:54.203 AIO0 00:29:54.203 13:13:58 -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 142190 00:29:54.203 13:13:58 -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 142190 00:29:54.203 13:13:58 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=142190 00:29:54.203 13:13:58 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:29:54.203 13:13:58 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:29:54.203 13:13:58 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:29:54.203 13:13:58 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:29:54.203 13:13:58 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:29:54.203 13:13:58 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:29:54.203 13:13:58 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:29:54.203 13:13:58 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:29:54.203 13:13:58 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:29:54.462 13:13:58 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:29:54.462 13:13:58 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:29:54.462 13:13:58 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:29:54.462 13:13:58 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:29:54.462 13:13:58 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:29:54.462 13:13:58 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:29:54.463 13:13:58 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:29:54.463 13:13:58 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:29:54.463 13:13:58 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:29:54.721 13:13:58 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:29:54.721 spdk_thread ids are 1 on reactor0. 
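The empty echo above is the cpumask-0x4 lookup returning no pinned threads, while the 0x1 lookup returned thread id 1 for reactor 0. Both go through the same pipeline: thread_get_stats over the RPC socket, then a jq select on the cpumask field. A sketch of that helper; the normalization of 0x1 to 1 happens at interrupt_common.sh@81 in the trace, and arithmetic expansion is one plausible, assumed way to do it:

    reactor_get_thread_ids() {
        local reactor_cpumask=$1                 # e.g. 0x1
        reactor_cpumask=$((reactor_cpumask))     # assumed: 0x1 -> 1, 0x4 -> 4
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats \
            | jq --arg reactor_cpumask "$reactor_cpumask" \
                 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
    }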
00:29:54.721 13:13:58 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:29:54.721 13:13:58 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:29:54.721 13:13:58 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:29:54.721 13:13:58 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 142190 0 00:29:54.721 13:13:58 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 142190 0 idle 00:29:54.721 13:13:58 -- interrupt/interrupt_common.sh@33 -- # local pid=142190 00:29:54.721 13:13:58 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:29:54.721 13:13:58 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:29:54.721 13:13:58 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:29:54.722 13:13:58 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:29:54.722 13:13:58 -- interrupt/interrupt_common.sh@41 -- # hash top 00:29:54.722 13:13:58 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:29:54.722 13:13:58 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:29:54.722 13:13:58 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142190 -w 256 00:29:54.722 13:13:58 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:29:54.722 13:13:58 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142190 root 20 0 20.1t 143600 29068 S 0.0 1.2 0:00.75 reactor_0' 00:29:54.722 13:13:58 -- interrupt/interrupt_common.sh@48 -- # echo 142190 root 20 0 20.1t 143600 29068 S 0.0 1.2 0:00.75 reactor_0 00:29:54.722 13:13:58 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:29:54.722 13:13:58 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:29:54.722 13:13:58 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:29:54.722 13:13:58 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:29:54.722 13:13:58 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:29:54.722 13:13:58 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:29:54.722 13:13:58 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:29:54.722 13:13:58 -- interrupt/interrupt_common.sh@56 -- # return 0 00:29:54.722 13:13:58 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:29:54.722 13:13:58 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 142190 1 00:29:54.722 13:13:58 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 142190 1 idle 00:29:54.722 13:13:58 -- interrupt/interrupt_common.sh@33 -- # local pid=142190 00:29:54.722 13:13:58 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:29:54.722 13:13:58 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:29:54.722 13:13:58 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:29:54.722 13:13:58 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:29:54.722 13:13:58 -- interrupt/interrupt_common.sh@41 -- # hash top 00:29:54.722 13:13:58 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:29:54.722 13:13:58 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:29:54.722 13:13:58 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142190 -w 256 00:29:54.722 13:13:58 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:29:54.980 13:13:59 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142194 root 20 0 20.1t 143600 29068 S 0.0 1.2 0:00.00 reactor_1' 00:29:54.980 13:13:59 -- interrupt/interrupt_common.sh@48 -- # echo 142194 root 20 0 20.1t 143600 29068 S 0.0 1.2 0:00.00 reactor_1 00:29:54.980 13:13:59 -- 
interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:29:54.980 13:13:59 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:29:54.980 13:13:59 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:29:54.980 13:13:59 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:29:54.980 13:13:59 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:29:54.980 13:13:59 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:29:54.980 13:13:59 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:29:54.980 13:13:59 -- interrupt/interrupt_common.sh@56 -- # return 0 00:29:54.980 13:13:59 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:29:54.980 13:13:59 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 142190 2 00:29:54.980 13:13:59 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 142190 2 idle 00:29:54.980 13:13:59 -- interrupt/interrupt_common.sh@33 -- # local pid=142190 00:29:54.980 13:13:59 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:29:54.980 13:13:59 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:29:54.980 13:13:59 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:29:54.980 13:13:59 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:29:54.980 13:13:59 -- interrupt/interrupt_common.sh@41 -- # hash top 00:29:54.980 13:13:59 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:29:54.980 13:13:59 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:29:54.980 13:13:59 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142190 -w 256 00:29:54.980 13:13:59 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:29:55.238 13:13:59 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142195 root 20 0 20.1t 143600 29068 S 0.0 1.2 0:00.00 reactor_2' 00:29:55.238 13:13:59 -- interrupt/interrupt_common.sh@48 -- # echo 142195 root 20 0 20.1t 143600 29068 S 0.0 1.2 0:00.00 reactor_2 00:29:55.238 13:13:59 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:29:55.238 13:13:59 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:29:55.238 13:13:59 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:29:55.238 13:13:59 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:29:55.238 13:13:59 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:29:55.238 13:13:59 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:29:55.238 13:13:59 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:29:55.238 13:13:59 -- interrupt/interrupt_common.sh@56 -- # return 0 00:29:55.238 13:13:59 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:29:55.238 13:13:59 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:29:55.497 [2024-04-17 13:13:59.441143] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:29:55.497 [2024-04-17 13:13:59.441436] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 
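All four mode switches in this test drive the same RPC: reactor_set_interrupt_mode from the out-of-tree interrupt_plugin (rpc.py locates the plugin module via PYTHONPATH, which the environment dump later in this log shows includes test/rpc_plugins). Collected from the trace, the full toggle sequence is:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d   # reactor 0 -> poll mode
    $rpc --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d   # reactor 2 -> poll mode
    $rpc --plugin interrupt_plugin reactor_set_interrupt_mode 2      # reactor 2 back to interrupt mode
    $rpc --plugin interrupt_plugin reactor_set_interrupt_mode 0      # reactor 0 back to interrupt mode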
00:29:55.497 [2024-04-17 13:13:59.441900] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:29:55.497 13:13:59 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:29:55.756 [2024-04-17 13:13:59.713055] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:29:55.756 [2024-04-17 13:13:59.713690] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:29:55.756 13:13:59 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:29:55.756 13:13:59 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 142190 0 00:29:55.756 13:13:59 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 142190 0 busy 00:29:55.756 13:13:59 -- interrupt/interrupt_common.sh@33 -- # local pid=142190 00:29:55.756 13:13:59 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:29:55.756 13:13:59 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:29:55.756 13:13:59 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:29:55.756 13:13:59 -- interrupt/interrupt_common.sh@41 -- # hash top 00:29:55.756 13:13:59 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:29:55.756 13:13:59 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:29:55.756 13:13:59 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142190 -w 256 00:29:55.756 13:13:59 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:29:55.756 13:13:59 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142190 root 20 0 20.1t 143684 29068 R 99.9 1.2 0:01.20 reactor_0' 00:29:55.756 13:13:59 -- interrupt/interrupt_common.sh@48 -- # echo 142190 root 20 0 20.1t 143684 29068 R 99.9 1.2 0:01.20 reactor_0 00:29:55.756 13:13:59 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:29:55.756 13:13:59 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:29:55.756 13:13:59 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:29:55.756 13:13:59 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:29:55.756 13:13:59 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:29:55.756 13:13:59 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:29:55.756 13:13:59 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:29:55.756 13:13:59 -- interrupt/interrupt_common.sh@56 -- # return 0 00:29:55.756 13:13:59 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:29:55.756 13:13:59 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 142190 2 00:29:55.756 13:13:59 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 142190 2 busy 00:29:55.756 13:13:59 -- interrupt/interrupt_common.sh@33 -- # local pid=142190 00:29:55.756 13:13:59 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:29:55.756 13:13:59 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:29:55.756 13:13:59 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:29:55.756 13:13:59 -- interrupt/interrupt_common.sh@41 -- # hash top 00:29:55.756 13:13:59 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:29:55.756 13:13:59 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:29:55.756 13:13:59 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142190 -w 256 00:29:55.756 13:13:59 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:29:56.015 13:14:00 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 
142195 root 20 0 20.1t 143684 29068 R 93.8 1.2 0:00.33 reactor_2' 00:29:56.015 13:14:00 -- interrupt/interrupt_common.sh@48 -- # echo 142195 root 20 0 20.1t 143684 29068 R 93.8 1.2 0:00.33 reactor_2 00:29:56.015 13:14:00 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:29:56.015 13:14:00 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:29:56.015 13:14:00 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=93.8 00:29:56.015 13:14:00 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=93 00:29:56.015 13:14:00 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:29:56.015 13:14:00 -- interrupt/interrupt_common.sh@51 -- # [[ 93 -lt 70 ]] 00:29:56.015 13:14:00 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:29:56.015 13:14:00 -- interrupt/interrupt_common.sh@56 -- # return 0 00:29:56.015 13:14:00 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:29:56.274 [2024-04-17 13:14:00.321261] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:29:56.274 [2024-04-17 13:14:00.321621] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:29:56.274 13:14:00 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:29:56.274 13:14:00 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 142190 2 00:29:56.274 13:14:00 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 142190 2 idle 00:29:56.274 13:14:00 -- interrupt/interrupt_common.sh@33 -- # local pid=142190 00:29:56.274 13:14:00 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:29:56.274 13:14:00 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:29:56.274 13:14:00 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:29:56.274 13:14:00 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:29:56.274 13:14:00 -- interrupt/interrupt_common.sh@41 -- # hash top 00:29:56.274 13:14:00 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:29:56.274 13:14:00 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:29:56.274 13:14:00 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142190 -w 256 00:29:56.274 13:14:00 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:29:56.533 13:14:00 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142195 root 20 0 20.1t 143748 29068 S 0.0 1.2 0:00.60 reactor_2' 00:29:56.533 13:14:00 -- interrupt/interrupt_common.sh@48 -- # echo 142195 root 20 0 20.1t 143748 29068 S 0.0 1.2 0:00.60 reactor_2 00:29:56.533 13:14:00 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:29:56.533 13:14:00 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:29:56.533 13:14:00 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:29:56.533 13:14:00 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:29:56.533 13:14:00 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:29:56.533 13:14:00 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:29:56.533 13:14:00 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:29:56.533 13:14:00 -- interrupt/interrupt_common.sh@56 -- # return 0 00:29:56.533 13:14:00 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:29:56.792 [2024-04-17 13:14:00.769387] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to 
enable interrupt mode on reactor 0. 00:29:56.792 [2024-04-17 13:14:00.770452] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 00:29:56.792 [2024-04-17 13:14:00.770505] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:29:56.792 13:14:00 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:29:56.792 13:14:00 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 142190 0 00:29:56.792 13:14:00 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 142190 0 idle 00:29:56.792 13:14:00 -- interrupt/interrupt_common.sh@33 -- # local pid=142190 00:29:56.792 13:14:00 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:29:56.792 13:14:00 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:29:56.792 13:14:00 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:29:56.792 13:14:00 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:29:56.792 13:14:00 -- interrupt/interrupt_common.sh@41 -- # hash top 00:29:56.792 13:14:00 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:29:56.792 13:14:00 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:29:56.792 13:14:00 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142190 -w 256 00:29:56.792 13:14:00 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:29:57.058 13:14:00 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142190 root 20 0 20.1t 143788 29068 S 0.0 1.2 0:02.09 reactor_0' 00:29:57.058 13:14:00 -- interrupt/interrupt_common.sh@48 -- # echo 142190 root 20 0 20.1t 143788 29068 S 0.0 1.2 0:02.09 reactor_0 00:29:57.058 13:14:00 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:29:57.058 13:14:00 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:29:57.058 13:14:00 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:29:57.058 13:14:00 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:29:57.058 13:14:00 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:29:57.058 13:14:00 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:29:57.058 13:14:00 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:29:57.058 13:14:00 -- interrupt/interrupt_common.sh@56 -- # return 0 00:29:57.058 13:14:00 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:29:57.058 13:14:00 -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:29:57.058 13:14:00 -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:29:57.058 13:14:00 -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 142190 00:29:57.058 13:14:00 -- common/autotest_common.sh@924 -- # '[' -z 142190 ']' 00:29:57.058 13:14:00 -- common/autotest_common.sh@928 -- # kill -0 142190 00:29:57.058 13:14:00 -- common/autotest_common.sh@929 -- # uname 00:29:57.058 13:14:00 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:29:57.058 13:14:00 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 142190 00:29:57.058 killing process with pid 142190 00:29:57.058 13:14:00 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:29:57.058 13:14:00 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:29:57.058 13:14:00 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 142190' 00:29:57.058 13:14:00 -- common/autotest_common.sh@943 -- # kill 142190 00:29:57.058 13:14:00 -- common/autotest_common.sh@948 -- # wait 142190 00:29:58.453 13:14:02 -- 
interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:29:58.453 13:14:02 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:29:58.453 00:29:58.453 real 0m12.273s 00:29:58.453 user 0m12.872s 00:29:58.453 sys 0m1.574s 00:29:58.453 13:14:02 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:29:58.453 ************************************ 00:29:58.453 END TEST reactor_set_interrupt 00:29:58.453 ************************************ 00:29:58.453 13:14:02 -- common/autotest_common.sh@10 -- # set +x 00:29:58.453 13:14:02 -- spdk/autotest.sh@189 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:29:58.453 13:14:02 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:29:58.453 13:14:02 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:29:58.453 13:14:02 -- common/autotest_common.sh@10 -- # set +x 00:29:58.453 ************************************ 00:29:58.453 START TEST reap_unregistered_poller 00:29:58.453 ************************************ 00:29:58.453 13:14:02 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:29:58.453 * Looking for test storage... 00:29:58.453 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:29:58.453 13:14:02 -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:29:58.453 13:14:02 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:29:58.453 13:14:02 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:29:58.453 13:14:02 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:29:58.453 13:14:02 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 
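The reap_unregistered_poller run starts the same way reactor_set_interrupt did: interrupt_common.sh resolves its own location, derives the repository root from it, and sources the shared harness (the source call appears at interrupt_common.sh@7 just below). The dirname/readlink pair traced above and the rootdir assignment that follows reduce to this idiom, where "$0" stands in for however the script actually refers to itself:

    testdir=$(readlink -f "$(dirname "$0")")    # .../spdk/test/interrupt
    rootdir=$(readlink -f "$testdir/../..")     # .../spdk
    source "$rootdir/test/common/autotest_common.sh"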
00:29:58.453 13:14:02 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:58.453 13:14:02 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:29:58.453 13:14:02 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:29:58.453 13:14:02 -- common/autotest_common.sh@34 -- # set -e 00:29:58.453 13:14:02 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:29:58.453 13:14:02 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:29:58.453 13:14:02 -- common/autotest_common.sh@38 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:29:58.453 13:14:02 -- common/autotest_common.sh@43 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:29:58.453 13:14:02 -- common/autotest_common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:29:58.453 13:14:02 -- common/build_config.sh@1 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:29:58.453 13:14:02 -- common/build_config.sh@2 -- # CONFIG_FIO_PLUGIN=y 00:29:58.453 13:14:02 -- common/build_config.sh@3 -- # CONFIG_NVME_CUSE=y 00:29:58.453 13:14:02 -- common/build_config.sh@4 -- # CONFIG_RAID5F=y 00:29:58.453 13:14:02 -- common/build_config.sh@5 -- # CONFIG_LTO=n 00:29:58.453 13:14:02 -- common/build_config.sh@6 -- # CONFIG_SMA=n 00:29:58.453 13:14:02 -- common/build_config.sh@7 -- # CONFIG_ISAL=y 00:29:58.453 13:14:02 -- common/build_config.sh@8 -- # CONFIG_OPENSSL_PATH= 00:29:58.453 13:14:02 -- common/build_config.sh@9 -- # CONFIG_IDXD_KERNEL=n 00:29:58.453 13:14:02 -- common/build_config.sh@10 -- # CONFIG_URING_PATH= 00:29:58.453 13:14:02 -- common/build_config.sh@11 -- # CONFIG_DAOS=n 00:29:58.453 13:14:02 -- common/build_config.sh@12 -- # CONFIG_DPDK_LIB_DIR= 00:29:58.453 13:14:02 -- common/build_config.sh@13 -- # CONFIG_OCF=n 00:29:58.453 13:14:02 -- common/build_config.sh@14 -- # CONFIG_EXAMPLES=y 00:29:58.453 13:14:02 -- common/build_config.sh@15 -- # CONFIG_RDMA_PROV=verbs 00:29:58.453 13:14:02 -- common/build_config.sh@16 -- # CONFIG_ISCSI_INITIATOR=y 00:29:58.453 13:14:02 -- common/build_config.sh@17 -- # CONFIG_VTUNE=n 00:29:58.453 13:14:02 -- common/build_config.sh@18 -- # CONFIG_DPDK_INC_DIR= 00:29:58.453 13:14:02 -- common/build_config.sh@19 -- # CONFIG_CET=n 00:29:58.453 13:14:02 -- common/build_config.sh@20 -- # CONFIG_TESTS=y 00:29:58.453 13:14:02 -- common/build_config.sh@21 -- # CONFIG_APPS=y 00:29:58.453 13:14:02 -- common/build_config.sh@22 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:29:58.453 13:14:02 -- common/build_config.sh@23 -- # CONFIG_DAOS_DIR= 00:29:58.453 13:14:02 -- common/build_config.sh@24 -- # CONFIG_CRYPTO_MLX5=n 00:29:58.453 13:14:02 -- common/build_config.sh@25 -- # CONFIG_XNVME=n 00:29:58.453 13:14:02 -- common/build_config.sh@26 -- # CONFIG_UNIT_TESTS=y 00:29:58.453 13:14:02 -- common/build_config.sh@27 -- # CONFIG_FUSE=n 00:29:58.453 13:14:02 -- common/build_config.sh@28 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:29:58.453 13:14:02 -- common/build_config.sh@29 -- # CONFIG_OCF_PATH= 00:29:58.453 13:14:02 -- common/build_config.sh@30 -- # CONFIG_WPDK_DIR= 00:29:58.453 13:14:02 -- common/build_config.sh@31 -- # CONFIG_VFIO_USER=n 00:29:58.453 13:14:02 -- common/build_config.sh@32 -- # CONFIG_MAX_LCORES= 00:29:58.453 13:14:02 -- common/build_config.sh@33 -- # CONFIG_ARCH=native 00:29:58.453 13:14:02 -- common/build_config.sh@34 -- # CONFIG_TSAN=n 00:29:58.453 13:14:02 -- common/build_config.sh@35 -- # CONFIG_VIRTIO=y 00:29:58.453 13:14:02 -- common/build_config.sh@36 -- # 
CONFIG_HAVE_EVP_MAC=n 00:29:58.453 13:14:02 -- common/build_config.sh@37 -- # CONFIG_IPSEC_MB=n 00:29:58.453 13:14:02 -- common/build_config.sh@38 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:29:58.453 13:14:02 -- common/build_config.sh@39 -- # CONFIG_ASAN=y 00:29:58.453 13:14:02 -- common/build_config.sh@40 -- # CONFIG_SHARED=n 00:29:58.453 13:14:02 -- common/build_config.sh@41 -- # CONFIG_VTUNE_DIR= 00:29:58.453 13:14:02 -- common/build_config.sh@42 -- # CONFIG_RDMA_SET_TOS=y 00:29:58.453 13:14:02 -- common/build_config.sh@43 -- # CONFIG_VBDEV_COMPRESS=n 00:29:58.453 13:14:02 -- common/build_config.sh@44 -- # CONFIG_VFIO_USER_DIR= 00:29:58.453 13:14:02 -- common/build_config.sh@45 -- # CONFIG_PGO_DIR= 00:29:58.453 13:14:02 -- common/build_config.sh@46 -- # CONFIG_FUZZER_LIB= 00:29:58.453 13:14:02 -- common/build_config.sh@47 -- # CONFIG_HAVE_EXECINFO_H=y 00:29:58.453 13:14:02 -- common/build_config.sh@48 -- # CONFIG_USDT=n 00:29:58.453 13:14:02 -- common/build_config.sh@49 -- # CONFIG_HAVE_KEYUTILS=y 00:29:58.453 13:14:02 -- common/build_config.sh@50 -- # CONFIG_URING_ZNS=n 00:29:58.453 13:14:02 -- common/build_config.sh@51 -- # CONFIG_FC_PATH= 00:29:58.453 13:14:02 -- common/build_config.sh@52 -- # CONFIG_COVERAGE=y 00:29:58.453 13:14:02 -- common/build_config.sh@53 -- # CONFIG_CUSTOMOCF=n 00:29:58.453 13:14:02 -- common/build_config.sh@54 -- # CONFIG_DPDK_PKG_CONFIG=n 00:29:58.453 13:14:02 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:29:58.453 13:14:02 -- common/build_config.sh@56 -- # CONFIG_DEBUG=y 00:29:58.453 13:14:02 -- common/build_config.sh@57 -- # CONFIG_RDMA=y 00:29:58.453 13:14:02 -- common/build_config.sh@58 -- # CONFIG_HAVE_ARC4RANDOM=n 00:29:58.453 13:14:02 -- common/build_config.sh@59 -- # CONFIG_FUZZER=n 00:29:58.453 13:14:02 -- common/build_config.sh@60 -- # CONFIG_FC=n 00:29:58.453 13:14:02 -- common/build_config.sh@61 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:29:58.453 13:14:02 -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBARCHIVE=n 00:29:58.453 13:14:02 -- common/build_config.sh@63 -- # CONFIG_DPDK_COMPRESSDEV=n 00:29:58.453 13:14:02 -- common/build_config.sh@64 -- # CONFIG_CROSS_PREFIX= 00:29:58.453 13:14:02 -- common/build_config.sh@65 -- # CONFIG_PREFIX=/usr/local 00:29:58.453 13:14:02 -- common/build_config.sh@66 -- # CONFIG_HAVE_LIBBSD=n 00:29:58.453 13:14:02 -- common/build_config.sh@67 -- # CONFIG_UBSAN=y 00:29:58.453 13:14:02 -- common/build_config.sh@68 -- # CONFIG_PGO_CAPTURE=n 00:29:58.453 13:14:02 -- common/build_config.sh@69 -- # CONFIG_UBLK=n 00:29:58.453 13:14:02 -- common/build_config.sh@70 -- # CONFIG_ISAL_CRYPTO=y 00:29:58.453 13:14:02 -- common/build_config.sh@71 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:29:58.453 13:14:02 -- common/build_config.sh@72 -- # CONFIG_CRYPTO=n 00:29:58.453 13:14:02 -- common/build_config.sh@73 -- # CONFIG_RBD=n 00:29:58.453 13:14:02 -- common/build_config.sh@74 -- # CONFIG_LIBDIR= 00:29:58.453 13:14:02 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB_DIR= 00:29:58.453 13:14:02 -- common/build_config.sh@76 -- # CONFIG_PGO_USE=n 00:29:58.453 13:14:02 -- common/build_config.sh@77 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:29:58.453 13:14:02 -- common/build_config.sh@78 -- # CONFIG_GOLANG=n 00:29:58.453 13:14:02 -- common/build_config.sh@79 -- # CONFIG_VHOST=y 00:29:58.453 13:14:02 -- common/build_config.sh@80 -- # CONFIG_IDXD=y 00:29:58.453 13:14:02 -- common/build_config.sh@81 -- # CONFIG_AVAHI=n 00:29:58.453 13:14:02 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:29:58.453 
13:14:02 -- common/autotest_common.sh@53 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:29:58.454 13:14:02 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:29:58.454 13:14:02 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:29:58.454 13:14:02 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:29:58.454 13:14:02 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:29:58.454 13:14:02 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:29:58.454 13:14:02 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:29:58.454 13:14:02 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:29:58.454 13:14:02 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:29:58.454 13:14:02 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:29:58.454 13:14:02 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:29:58.454 13:14:02 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:29:58.454 13:14:02 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:29:58.454 13:14:02 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:29:58.454 13:14:02 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:29:58.454 13:14:02 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:29:58.454 #define SPDK_CONFIG_H 00:29:58.454 #define SPDK_CONFIG_APPS 1 00:29:58.454 #define SPDK_CONFIG_ARCH native 00:29:58.454 #define SPDK_CONFIG_ASAN 1 00:29:58.454 #undef SPDK_CONFIG_AVAHI 00:29:58.454 #undef SPDK_CONFIG_CET 00:29:58.454 #define SPDK_CONFIG_COVERAGE 1 00:29:58.454 #define SPDK_CONFIG_CROSS_PREFIX 00:29:58.454 #undef SPDK_CONFIG_CRYPTO 00:29:58.454 #undef SPDK_CONFIG_CRYPTO_MLX5 00:29:58.454 #undef SPDK_CONFIG_CUSTOMOCF 00:29:58.454 #undef SPDK_CONFIG_DAOS 00:29:58.454 #define SPDK_CONFIG_DAOS_DIR 00:29:58.454 #define SPDK_CONFIG_DEBUG 1 00:29:58.454 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:29:58.454 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:29:58.454 #define SPDK_CONFIG_DPDK_INC_DIR 00:29:58.454 #define SPDK_CONFIG_DPDK_LIB_DIR 00:29:58.454 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:29:58.454 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:29:58.454 #define SPDK_CONFIG_EXAMPLES 1 00:29:58.454 #undef SPDK_CONFIG_FC 00:29:58.454 #define SPDK_CONFIG_FC_PATH 00:29:58.454 #define SPDK_CONFIG_FIO_PLUGIN 1 00:29:58.454 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:29:58.454 #undef SPDK_CONFIG_FUSE 00:29:58.454 #undef SPDK_CONFIG_FUZZER 00:29:58.454 #define SPDK_CONFIG_FUZZER_LIB 00:29:58.454 #undef SPDK_CONFIG_GOLANG 00:29:58.454 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:29:58.454 #undef SPDK_CONFIG_HAVE_EVP_MAC 00:29:58.454 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:29:58.454 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:29:58.454 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:29:58.454 #undef SPDK_CONFIG_HAVE_LIBBSD 00:29:58.454 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:29:58.454 #define SPDK_CONFIG_IDXD 1 00:29:58.454 #undef SPDK_CONFIG_IDXD_KERNEL 00:29:58.454 #undef SPDK_CONFIG_IPSEC_MB 00:29:58.454 #define SPDK_CONFIG_IPSEC_MB_DIR 00:29:58.454 #define SPDK_CONFIG_ISAL 1 00:29:58.454 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:29:58.454 #define 
SPDK_CONFIG_ISCSI_INITIATOR 1 00:29:58.454 #define SPDK_CONFIG_LIBDIR 00:29:58.454 #undef SPDK_CONFIG_LTO 00:29:58.454 #define SPDK_CONFIG_MAX_LCORES 00:29:58.454 #define SPDK_CONFIG_NVME_CUSE 1 00:29:58.454 #undef SPDK_CONFIG_OCF 00:29:58.454 #define SPDK_CONFIG_OCF_PATH 00:29:58.454 #define SPDK_CONFIG_OPENSSL_PATH 00:29:58.454 #undef SPDK_CONFIG_PGO_CAPTURE 00:29:58.454 #define SPDK_CONFIG_PGO_DIR 00:29:58.454 #undef SPDK_CONFIG_PGO_USE 00:29:58.454 #define SPDK_CONFIG_PREFIX /usr/local 00:29:58.454 #define SPDK_CONFIG_RAID5F 1 00:29:58.454 #undef SPDK_CONFIG_RBD 00:29:58.454 #define SPDK_CONFIG_RDMA 1 00:29:58.454 #define SPDK_CONFIG_RDMA_PROV verbs 00:29:58.454 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:29:58.454 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:29:58.454 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:29:58.454 #undef SPDK_CONFIG_SHARED 00:29:58.454 #undef SPDK_CONFIG_SMA 00:29:58.454 #define SPDK_CONFIG_TESTS 1 00:29:58.454 #undef SPDK_CONFIG_TSAN 00:29:58.454 #undef SPDK_CONFIG_UBLK 00:29:58.454 #define SPDK_CONFIG_UBSAN 1 00:29:58.454 #define SPDK_CONFIG_UNIT_TESTS 1 00:29:58.454 #undef SPDK_CONFIG_URING 00:29:58.454 #define SPDK_CONFIG_URING_PATH 00:29:58.454 #undef SPDK_CONFIG_URING_ZNS 00:29:58.454 #undef SPDK_CONFIG_USDT 00:29:58.454 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:29:58.454 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:29:58.454 #undef SPDK_CONFIG_VFIO_USER 00:29:58.454 #define SPDK_CONFIG_VFIO_USER_DIR 00:29:58.454 #define SPDK_CONFIG_VHOST 1 00:29:58.454 #define SPDK_CONFIG_VIRTIO 1 00:29:58.454 #undef SPDK_CONFIG_VTUNE 00:29:58.454 #define SPDK_CONFIG_VTUNE_DIR 00:29:58.454 #define SPDK_CONFIG_WERROR 1 00:29:58.454 #define SPDK_CONFIG_WPDK_DIR 00:29:58.454 #undef SPDK_CONFIG_XNVME 00:29:58.454 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:29:58.454 13:14:02 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:29:58.454 13:14:02 -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:58.454 13:14:02 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:58.454 13:14:02 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:58.454 13:14:02 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:58.454 13:14:02 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:58.454 13:14:02 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:58.454 13:14:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:58.454 13:14:02 -- paths/export.sh@5 -- # export PATH 00:29:58.454 13:14:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:58.454 13:14:02 -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:29:58.454 13:14:02 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:29:58.454 13:14:02 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:29:58.454 13:14:02 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:29:58.454 13:14:02 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:29:58.454 13:14:02 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:29:58.454 13:14:02 -- pm/common@67 -- # TEST_TAG=N/A 00:29:58.454 13:14:02 -- pm/common@68 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:29:58.454 13:14:02 -- pm/common@70 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:29:58.454 13:14:02 -- pm/common@71 -- # uname -s 00:29:58.454 13:14:02 -- pm/common@71 -- # PM_OS=Linux 00:29:58.454 13:14:02 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:29:58.454 13:14:02 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:29:58.454 13:14:02 -- pm/common@76 -- # [[ Linux == Linux ]] 00:29:58.454 13:14:02 -- pm/common@76 -- # [[ QEMU != QEMU ]] 00:29:58.454 13:14:02 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:29:58.454 13:14:02 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:29:58.454 13:14:02 -- pm/common@85 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:29:58.454 13:14:02 -- common/autotest_common.sh@57 -- # : 0 00:29:58.454 13:14:02 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:29:58.454 13:14:02 -- common/autotest_common.sh@61 -- # : 0 00:29:58.454 13:14:02 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:29:58.454 13:14:02 -- common/autotest_common.sh@63 -- # : 0 00:29:58.454 13:14:02 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:29:58.454 13:14:02 -- common/autotest_common.sh@65 -- # : 1 00:29:58.454 13:14:02 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:29:58.454 13:14:02 -- common/autotest_common.sh@67 -- # : 1 00:29:58.454 13:14:02 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:29:58.454 13:14:02 -- common/autotest_common.sh@69 -- # : 00:29:58.454 13:14:02 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:29:58.454 13:14:02 -- common/autotest_common.sh@71 -- # : 0 00:29:58.454 13:14:02 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:29:58.454 13:14:02 -- common/autotest_common.sh@73 -- # : 0 00:29:58.454 13:14:02 -- 
common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:29:58.454 13:14:02 -- common/autotest_common.sh@75 -- # : 0 00:29:58.454 13:14:02 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:29:58.454 13:14:02 -- common/autotest_common.sh@77 -- # : 0 00:29:58.454 13:14:02 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:29:58.454 13:14:02 -- common/autotest_common.sh@79 -- # : 1 00:29:58.454 13:14:02 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:29:58.454 13:14:02 -- common/autotest_common.sh@81 -- # : 0 00:29:58.454 13:14:02 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:29:58.454 13:14:02 -- common/autotest_common.sh@83 -- # : 0 00:29:58.454 13:14:02 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:29:58.454 13:14:02 -- common/autotest_common.sh@85 -- # : 0 00:29:58.454 13:14:02 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:29:58.454 13:14:02 -- common/autotest_common.sh@87 -- # : 0 00:29:58.455 13:14:02 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:29:58.455 13:14:02 -- common/autotest_common.sh@89 -- # : 0 00:29:58.455 13:14:02 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:29:58.455 13:14:02 -- common/autotest_common.sh@91 -- # : 0 00:29:58.455 13:14:02 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:29:58.455 13:14:02 -- common/autotest_common.sh@93 -- # : 0 00:29:58.455 13:14:02 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:29:58.455 13:14:02 -- common/autotest_common.sh@95 -- # : 0 00:29:58.455 13:14:02 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:29:58.455 13:14:02 -- common/autotest_common.sh@97 -- # : 0 00:29:58.455 13:14:02 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:29:58.455 13:14:02 -- common/autotest_common.sh@99 -- # : 0 00:29:58.455 13:14:02 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:29:58.455 13:14:02 -- common/autotest_common.sh@101 -- # : rdma 00:29:58.455 13:14:02 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:29:58.455 13:14:02 -- common/autotest_common.sh@103 -- # : 0 00:29:58.455 13:14:02 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:29:58.455 13:14:02 -- common/autotest_common.sh@105 -- # : 0 00:29:58.455 13:14:02 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:29:58.455 13:14:02 -- common/autotest_common.sh@107 -- # : 1 00:29:58.455 13:14:02 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:29:58.455 13:14:02 -- common/autotest_common.sh@109 -- # : 0 00:29:58.455 13:14:02 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:29:58.455 13:14:02 -- common/autotest_common.sh@111 -- # : 0 00:29:58.455 13:14:02 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:29:58.455 13:14:02 -- common/autotest_common.sh@113 -- # : 0 00:29:58.455 13:14:02 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:29:58.455 13:14:02 -- common/autotest_common.sh@115 -- # : 0 00:29:58.455 13:14:02 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:29:58.455 13:14:02 -- common/autotest_common.sh@117 -- # : 0 00:29:58.455 13:14:02 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:29:58.455 13:14:02 -- common/autotest_common.sh@119 -- # : 1 00:29:58.455 13:14:02 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:29:58.455 13:14:02 -- common/autotest_common.sh@121 -- # : 1 00:29:58.455 
13:14:02 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:29:58.455 13:14:02 -- common/autotest_common.sh@123 -- # : 00:29:58.455 13:14:02 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:29:58.455 13:14:02 -- common/autotest_common.sh@125 -- # : 0 00:29:58.455 13:14:02 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:29:58.455 13:14:02 -- common/autotest_common.sh@127 -- # : 0 00:29:58.455 13:14:02 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:29:58.455 13:14:02 -- common/autotest_common.sh@129 -- # : 0 00:29:58.455 13:14:02 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:29:58.455 13:14:02 -- common/autotest_common.sh@131 -- # : 0 00:29:58.455 13:14:02 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:29:58.455 13:14:02 -- common/autotest_common.sh@133 -- # : 0 00:29:58.455 13:14:02 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:29:58.455 13:14:02 -- common/autotest_common.sh@135 -- # : 0 00:29:58.455 13:14:02 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:29:58.455 13:14:02 -- common/autotest_common.sh@137 -- # : 00:29:58.455 13:14:02 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:29:58.455 13:14:02 -- common/autotest_common.sh@139 -- # : true 00:29:58.455 13:14:02 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:29:58.455 13:14:02 -- common/autotest_common.sh@141 -- # : 1 00:29:58.455 13:14:02 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:29:58.455 13:14:02 -- common/autotest_common.sh@143 -- # : 0 00:29:58.455 13:14:02 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:29:58.455 13:14:02 -- common/autotest_common.sh@145 -- # : 0 00:29:58.455 13:14:02 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:29:58.455 13:14:02 -- common/autotest_common.sh@147 -- # : 0 00:29:58.455 13:14:02 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:29:58.455 13:14:02 -- common/autotest_common.sh@149 -- # : 0 00:29:58.455 13:14:02 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:29:58.455 13:14:02 -- common/autotest_common.sh@151 -- # : 0 00:29:58.455 13:14:02 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:29:58.455 13:14:02 -- common/autotest_common.sh@153 -- # : 00:29:58.455 13:14:02 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:29:58.455 13:14:02 -- common/autotest_common.sh@155 -- # : 0 00:29:58.455 13:14:02 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:29:58.455 13:14:02 -- common/autotest_common.sh@157 -- # : 0 00:29:58.455 13:14:02 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:29:58.455 13:14:02 -- common/autotest_common.sh@159 -- # : 0 00:29:58.455 13:14:02 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:29:58.455 13:14:02 -- common/autotest_common.sh@161 -- # : 0 00:29:58.455 13:14:02 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:29:58.455 13:14:02 -- common/autotest_common.sh@163 -- # : 0 00:29:58.455 13:14:02 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:29:58.455 13:14:02 -- common/autotest_common.sh@166 -- # : 00:29:58.455 13:14:02 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:29:58.455 13:14:02 -- common/autotest_common.sh@168 -- # : 0 00:29:58.455 13:14:02 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:29:58.455 13:14:02 -- common/autotest_common.sh@170 -- # : 0 
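The long run of ": <value>" lines paired with export statements above and below is consistent with bash's default-if-unset expansion: the colon builtin evaluates its arguments for side effects only, so each flag keeps any value inherited from the environment and falls back to a default otherwise. The exact script text is not in the log; the assumed shape of each pair is:

    : "${SPDK_TEST_NVME:=0}"     # keep the inherited value, else default to 0
    export SPDK_TEST_NVME        # make it visible to child processes
    : "${SPDK_RUN_ASAN:=0}"
    export SPDK_RUN_ASAN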
00:29:58.455 13:14:02 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:29:58.455 13:14:02 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:29:58.455 13:14:02 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:29:58.455 13:14:02 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:29:58.455 13:14:02 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:29:58.455 13:14:02 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:29:58.455 13:14:02 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:29:58.455 13:14:02 -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:29:58.455 13:14:02 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:29:58.455 13:14:02 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:29:58.455 13:14:02 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:29:58.455 13:14:02 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:29:58.455 13:14:02 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:29:58.455 13:14:02 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:29:58.455 13:14:02 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:29:58.455 13:14:02 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:29:58.455 13:14:02 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:29:58.455 13:14:02 -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:29:58.455 13:14:02 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:29:58.455 13:14:02 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:29:58.455 
13:14:02 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:29:58.455 13:14:02 -- common/autotest_common.sh@199 -- # cat 00:29:58.455 13:14:02 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:29:58.455 13:14:02 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:29:58.455 13:14:02 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:29:58.455 13:14:02 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:29:58.455 13:14:02 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:29:58.455 13:14:02 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:29:58.455 13:14:02 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:29:58.455 13:14:02 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:29:58.455 13:14:02 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:29:58.455 13:14:02 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:29:58.455 13:14:02 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:29:58.456 13:14:02 -- common/autotest_common.sh@242 -- # export QEMU_BIN= 00:29:58.456 13:14:02 -- common/autotest_common.sh@242 -- # QEMU_BIN= 00:29:58.456 13:14:02 -- common/autotest_common.sh@243 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:29:58.456 13:14:02 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:29:58.456 13:14:02 -- common/autotest_common.sh@245 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:29:58.456 13:14:02 -- common/autotest_common.sh@245 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:29:58.456 13:14:02 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:29:58.456 13:14:02 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:29:58.456 13:14:02 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:29:58.456 13:14:02 -- common/autotest_common.sh@252 -- # export valgrind= 00:29:58.456 13:14:02 -- common/autotest_common.sh@252 -- # valgrind= 00:29:58.456 13:14:02 -- common/autotest_common.sh@258 -- # uname -s 00:29:58.456 13:14:02 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:29:58.456 13:14:02 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:29:58.456 13:14:02 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:29:58.456 13:14:02 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:29:58.456 13:14:02 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:29:58.456 13:14:02 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:29:58.456 13:14:02 -- common/autotest_common.sh@268 -- # MAKE=make 00:29:58.456 13:14:02 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j10 00:29:58.456 13:14:02 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:29:58.456 13:14:02 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:29:58.456 13:14:02 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:29:58.456 13:14:02 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:29:58.456 13:14:02 -- common/autotest_common.sh@307 -- # [[ -z 142391 ]] 00:29:58.456 13:14:02 -- common/autotest_common.sh@307 -- # kill -0 142391 00:29:58.456 13:14:02 -- 
common/autotest_common.sh@1654 -- # set_test_storage 2147483648 00:29:58.456 13:14:02 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:29:58.456 13:14:02 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:29:58.456 13:14:02 -- common/autotest_common.sh@320 -- # local mount target_dir 00:29:58.456 13:14:02 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:29:58.456 13:14:02 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:29:58.456 13:14:02 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:29:58.456 13:14:02 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:29:58.456 13:14:02 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.jRQn0H 00:29:58.456 13:14:02 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:29:58.456 13:14:02 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:29:58.456 13:14:02 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:29:58.456 13:14:02 -- common/autotest_common.sh@344 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.jRQn0H/tests/interrupt /tmp/spdk.jRQn0H 00:29:58.456 13:14:02 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:29:58.456 13:14:02 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:29:58.456 13:14:02 -- common/autotest_common.sh@316 -- # df -T 00:29:58.456 13:14:02 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:29:58.715 13:14:02 -- common/autotest_common.sh@350 -- # mounts["$mount"]=udev 00:29:58.715 13:14:02 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:29:58.715 13:14:02 -- common/autotest_common.sh@351 -- # avails["$mount"]=6224465920 00:29:58.715 13:14:02 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6224465920 00:29:58.715 13:14:02 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:29:58.715 13:14:02 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:29:58.715 13:14:02 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:29:58.715 13:14:02 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:29:58.715 13:14:02 -- common/autotest_common.sh@351 -- # avails["$mount"]=1249763328 00:29:58.715 13:14:02 -- common/autotest_common.sh@351 -- # sizes["$mount"]=1254514688 00:29:58.715 13:14:02 -- common/autotest_common.sh@352 -- # uses["$mount"]=4751360 00:29:58.715 13:14:02 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:29:58.715 13:14:02 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda1 00:29:58.715 13:14:02 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext4 00:29:58.716 13:14:02 -- common/autotest_common.sh@351 -- # avails["$mount"]=10598092800 00:29:58.716 13:14:02 -- common/autotest_common.sh@351 -- # sizes["$mount"]=20616794112 00:29:58.716 13:14:02 -- common/autotest_common.sh@352 -- # uses["$mount"]=10001924096 00:29:58.716 13:14:02 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:29:58.716 13:14:02 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:29:58.716 13:14:02 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:29:58.716 13:14:02 -- common/autotest_common.sh@351 -- # avails["$mount"]=6269952000 00:29:58.716 13:14:02 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6272565248 00:29:58.716 13:14:02 -- common/autotest_common.sh@352 -- # 
uses["$mount"]=2613248 00:29:58.716 13:14:02 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:29:58.716 13:14:02 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:29:58.716 13:14:02 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:29:58.716 13:14:02 -- common/autotest_common.sh@351 -- # avails["$mount"]=5242880 00:29:58.716 13:14:02 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5242880 00:29:58.716 13:14:02 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:29:58.716 13:14:02 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:29:58.716 13:14:02 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:29:58.716 13:14:02 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:29:58.716 13:14:02 -- common/autotest_common.sh@351 -- # avails["$mount"]=6272565248 00:29:58.716 13:14:02 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6272565248 00:29:58.716 13:14:02 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:29:58.716 13:14:02 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:29:58.716 13:14:02 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda15 00:29:58.716 13:14:02 -- common/autotest_common.sh@350 -- # fss["$mount"]=vfat 00:29:58.716 13:14:02 -- common/autotest_common.sh@351 -- # avails["$mount"]=103089152 00:29:58.716 13:14:02 -- common/autotest_common.sh@351 -- # sizes["$mount"]=109422592 00:29:58.716 13:14:02 -- common/autotest_common.sh@352 -- # uses["$mount"]=6334464 00:29:58.716 13:14:02 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:29:58.716 13:14:02 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/loop2 00:29:58.716 13:14:02 -- common/autotest_common.sh@350 -- # fss["$mount"]=squashfs 00:29:58.716 13:14:02 -- common/autotest_common.sh@351 -- # avails["$mount"]=0 00:29:58.716 13:14:02 -- common/autotest_common.sh@351 -- # sizes["$mount"]=41025536 00:29:58.716 13:14:02 -- common/autotest_common.sh@352 -- # uses["$mount"]=41025536 00:29:58.716 13:14:02 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:29:58.716 13:14:02 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/loop1 00:29:58.716 13:14:02 -- common/autotest_common.sh@350 -- # fss["$mount"]=squashfs 00:29:58.716 13:14:02 -- common/autotest_common.sh@351 -- # avails["$mount"]=0 00:29:58.716 13:14:02 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:29:58.716 13:14:02 -- common/autotest_common.sh@352 -- # uses["$mount"]=67108864 00:29:58.716 13:14:02 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:29:58.716 13:14:02 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/loop0 00:29:58.716 13:14:02 -- common/autotest_common.sh@350 -- # fss["$mount"]=squashfs 00:29:58.716 13:14:02 -- common/autotest_common.sh@351 -- # avails["$mount"]=0 00:29:58.716 13:14:02 -- common/autotest_common.sh@351 -- # sizes["$mount"]=96337920 00:29:58.716 13:14:02 -- common/autotest_common.sh@352 -- # uses["$mount"]=96337920 00:29:58.716 13:14:02 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:29:58.716 13:14:02 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:29:58.716 13:14:02 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:29:58.716 13:14:02 -- common/autotest_common.sh@351 -- # avails["$mount"]=1254510592 00:29:58.716 13:14:02 -- common/autotest_common.sh@351 -- # 
sizes["$mount"]=1254510592 00:29:58.716 13:14:02 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:29:58.716 13:14:02 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:29:58.716 13:14:02 -- common/autotest_common.sh@350 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest_3/ubuntu2004-libvirt/output 00:29:58.716 13:14:02 -- common/autotest_common.sh@350 -- # fss["$mount"]=fuse.sshfs 00:29:58.716 13:14:02 -- common/autotest_common.sh@351 -- # avails["$mount"]=93203021824 00:29:58.716 13:14:02 -- common/autotest_common.sh@351 -- # sizes["$mount"]=105088212992 00:29:58.716 13:14:02 -- common/autotest_common.sh@352 -- # uses["$mount"]=6499758080 00:29:58.716 13:14:02 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:29:58.716 13:14:02 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/loop3 00:29:58.716 13:14:02 -- common/autotest_common.sh@350 -- # fss["$mount"]=squashfs 00:29:58.716 13:14:02 -- common/autotest_common.sh@351 -- # avails["$mount"]=0 00:29:58.716 13:14:02 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:29:58.716 13:14:02 -- common/autotest_common.sh@352 -- # uses["$mount"]=67108864 00:29:58.716 13:14:02 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:29:58.716 13:14:02 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:29:58.716 * Looking for test storage... 00:29:58.716 13:14:02 -- common/autotest_common.sh@357 -- # local target_space new_size 00:29:58.716 13:14:02 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:29:58.716 13:14:02 -- common/autotest_common.sh@361 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:29:58.716 13:14:02 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:29:58.716 13:14:02 -- common/autotest_common.sh@361 -- # mount=/ 00:29:58.716 13:14:02 -- common/autotest_common.sh@363 -- # target_space=10598092800 00:29:58.716 13:14:02 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:29:58.716 13:14:02 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:29:58.716 13:14:02 -- common/autotest_common.sh@369 -- # [[ ext4 == tmpfs ]] 00:29:58.716 13:14:02 -- common/autotest_common.sh@369 -- # [[ ext4 == ramfs ]] 00:29:58.716 13:14:02 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:29:58.716 13:14:02 -- common/autotest_common.sh@370 -- # new_size=12216516608 00:29:58.716 13:14:02 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:29:58.716 13:14:02 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:29:58.716 13:14:02 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:29:58.716 13:14:02 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:29:58.716 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:29:58.716 13:14:02 -- common/autotest_common.sh@378 -- # return 0 00:29:58.716 13:14:02 -- common/autotest_common.sh@1656 -- # set -o errtrace 00:29:58.716 13:14:02 -- common/autotest_common.sh@1657 -- # shopt -s extdebug 00:29:58.716 13:14:02 -- common/autotest_common.sh@1658 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:29:58.716 13:14:02 -- common/autotest_common.sh@1660 -- # PS4=' \t -- 
${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:29:58.716 13:14:02 -- common/autotest_common.sh@1661 -- # true 00:29:58.716 13:14:02 -- common/autotest_common.sh@1663 -- # xtrace_fd 00:29:58.716 13:14:02 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:29:58.716 13:14:02 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:29:58.716 13:14:02 -- common/autotest_common.sh@27 -- # exec 00:29:58.716 13:14:02 -- common/autotest_common.sh@29 -- # exec 00:29:58.716 13:14:02 -- common/autotest_common.sh@31 -- # xtrace_restore 00:29:58.716 13:14:02 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:29:58.716 13:14:02 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:29:58.716 13:14:02 -- common/autotest_common.sh@18 -- # set -x 00:29:58.716 13:14:02 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:58.716 13:14:02 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:29:58.716 13:14:02 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:29:58.716 13:14:02 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:29:58.716 13:14:02 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:29:58.716 13:14:02 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:29:58.716 13:14:02 -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:29:58.716 13:14:02 -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:29:58.716 13:14:02 -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:29:58.716 13:14:02 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:58.716 13:14:02 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:29:58.716 13:14:02 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=142435 00:29:58.716 13:14:02 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:29:58.716 13:14:02 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:58.716 13:14:02 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 142435 /var/tmp/spdk.sock 00:29:58.716 13:14:02 -- common/autotest_common.sh@817 -- # '[' -z 142435 ']' 00:29:58.716 13:14:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:58.716 13:14:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:58.716 13:14:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:58.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:58.716 13:14:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:58.716 13:14:02 -- common/autotest_common.sh@10 -- # set +x 00:29:58.716 [2024-04-17 13:14:02.682414] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
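
The set_test_storage walk traced above (autotest_common.sh@316-@378) reduces to a small candidate loop: parse df -T into per-mount tables, then take the first of storage_candidates ("$testdir", "$storage_fallback/tests/...", "$storage_fallback") whose mount has room. A minimal sketch of that logic, reconstructed from the xtrace rather than copied from autotest_common.sh; df is assumed to report byte-valued columns here, as the numbers in the trace suggest:

  declare -A mounts fss sizes avails uses
  while read -r source fs size use avail _ mount; do
      mounts["$mount"]=$source; fss["$mount"]=$fs
      sizes["$mount"]=$size; avails["$mount"]=$avail; uses["$mount"]=$use
  done < <(df -T | grep -v Filesystem)

  requested_size=$(( 2147483648 + 64 * 1024 * 1024 ))   # 2 GiB + slack = 2214592512
  for target_dir in "${storage_candidates[@]}"; do
      mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
      target_space=${avails["$mount"]}
      (( target_space == 0 || target_space < requested_size )) && continue
      if [[ ${fss["$mount"]} == tmpfs || ${fss["$mount"]} == ramfs || $mount == / ]]; then
          # refuse a mount that the test data would push past 95% full:
          # new_size = used + requested, exactly the 12216516608 seen in the trace
          new_size=$(( uses["$mount"] + requested_size ))
          (( new_size * 100 / sizes["$mount"] > 95 )) && continue
      fi
      export SPDK_TEST_STORAGE=$target_dir
      printf '* Found test storage at %s\n' "$target_dir"
      break
  done
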
00:29:58.716 [2024-04-17 13:14:02.682644] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142435 ] 00:29:58.716 [2024-04-17 13:14:02.859930] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:58.975 [2024-04-17 13:14:03.071509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:58.976 [2024-04-17 13:14:03.071569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:58.976 [2024-04-17 13:14:03.071561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:59.234 [2024-04-17 13:14:03.370111] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:59.802 13:14:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:59.802 13:14:03 -- common/autotest_common.sh@850 -- # return 0 00:29:59.802 13:14:03 -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:29:59.802 13:14:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:59.802 13:14:03 -- common/autotest_common.sh@10 -- # set +x 00:29:59.802 13:14:03 -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:29:59.802 13:14:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:59.802 13:14:03 -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:29:59.802 "name": "app_thread", 00:29:59.802 "id": 1, 00:29:59.802 "active_pollers": [], 00:29:59.802 "timed_pollers": [ 00:29:59.802 { 00:29:59.802 "name": "rpc_subsystem_poll_servers", 00:29:59.802 "id": 1, 00:29:59.802 "state": "waiting", 00:29:59.802 "run_count": 0, 00:29:59.802 "busy_count": 0, 00:29:59.802 "period_ticks": 8800000 00:29:59.802 } 00:29:59.802 ], 00:29:59.802 "paused_pollers": [] 00:29:59.802 }' 00:29:59.802 13:14:03 -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:29:59.803 13:14:03 -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:29:59.803 13:14:03 -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:29:59.803 13:14:03 -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:29:59.803 13:14:03 -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll_servers 00:29:59.803 13:14:03 -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:29:59.803 13:14:03 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:29:59.803 13:14:03 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:29:59.803 13:14:03 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:29:59.803 5000+0 records in 00:29:59.803 5000+0 records out 00:29:59.803 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0242364 s, 423 MB/s 00:29:59.803 13:14:03 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:30:00.062 AIO0 00:30:00.062 13:14:04 -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:00.336 13:14:04 -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:30:00.599 13:14:04 -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:30:00.599 13:14:04 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:30:00.599 13:14:04 -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:30:00.599 13:14:04 -- common/autotest_common.sh@10 -- # set +x 00:30:00.599 13:14:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:00.599 13:14:04 -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:30:00.599 "name": "app_thread", 00:30:00.599 "id": 1, 00:30:00.599 "active_pollers": [], 00:30:00.599 "timed_pollers": [ 00:30:00.599 { 00:30:00.599 "name": "rpc_subsystem_poll_servers", 00:30:00.599 "id": 1, 00:30:00.599 "state": "waiting", 00:30:00.599 "run_count": 0, 00:30:00.599 "busy_count": 0, 00:30:00.599 "period_ticks": 8800000 00:30:00.599 } 00:30:00.599 ], 00:30:00.599 "paused_pollers": [] 00:30:00.599 }' 00:30:00.599 13:14:04 -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:30:00.599 13:14:04 -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:30:00.599 13:14:04 -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:30:00.599 13:14:04 -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:30:00.599 13:14:04 -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll_servers 00:30:00.599 13:14:04 -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll_servers == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l\_\s\e\r\v\e\r\s ]] 00:30:00.599 13:14:04 -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:30:00.599 13:14:04 -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 142435 00:30:00.599 13:14:04 -- common/autotest_common.sh@924 -- # '[' -z 142435 ']' 00:30:00.599 13:14:04 -- common/autotest_common.sh@928 -- # kill -0 142435 00:30:00.599 13:14:04 -- common/autotest_common.sh@929 -- # uname 00:30:00.599 13:14:04 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:30:00.599 13:14:04 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 142435 00:30:00.599 13:14:04 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:30:00.599 killing process with pid 142435 00:30:00.599 13:14:04 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:30:00.599 13:14:04 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 142435' 00:30:00.599 13:14:04 -- common/autotest_common.sh@943 -- # kill 142435 00:30:00.599 13:14:04 -- common/autotest_common.sh@948 -- # wait 142435 00:30:01.992 13:14:05 -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:30:01.992 13:14:05 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:30:01.992 00:30:01.992 real 0m3.470s 00:30:01.992 user 0m2.984s 00:30:01.992 sys 0m0.491s 00:30:01.992 13:14:05 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:30:01.992 13:14:05 -- common/autotest_common.sh@10 -- # set +x 00:30:01.992 ************************************ 00:30:01.992 END TEST reap_unregistered_poller 00:30:01.992 ************************************ 00:30:01.992 13:14:05 -- spdk/autotest.sh@193 -- # uname -s 00:30:01.992 13:14:05 -- spdk/autotest.sh@193 -- # [[ Linux == Linux ]] 00:30:01.992 13:14:05 -- spdk/autotest.sh@194 -- # [[ 1 -eq 1 ]] 00:30:01.992 13:14:05 -- spdk/autotest.sh@200 -- # [[ 0 -eq 0 ]] 00:30:01.992 13:14:05 -- spdk/autotest.sh@201 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:30:01.992 13:14:05 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:30:01.992 13:14:05 -- common/autotest_common.sh@1081 -- # 
xtrace_disable 00:30:01.992 13:14:05 -- common/autotest_common.sh@10 -- # set +x 00:30:01.992 ************************************ 00:30:01.992 START TEST spdk_dd 00:30:01.992 ************************************ 00:30:01.992 13:14:05 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:30:01.992 * Looking for test storage... 00:30:01.992 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:30:01.992 13:14:06 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:01.992 13:14:06 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:01.992 13:14:06 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:01.992 13:14:06 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:01.993 13:14:06 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:01.993 13:14:06 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:01.993 13:14:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:01.993 13:14:06 -- paths/export.sh@5 -- # export PATH 00:30:01.993 13:14:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:01.993 13:14:06 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:02.252 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:30:02.252 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:03.189 13:14:07 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:30:03.189 13:14:07 -- dd/dd.sh@11 -- # nvme_in_userspace 00:30:03.189 13:14:07 -- scripts/common.sh@309 -- # local bdf bdfs 00:30:03.189 13:14:07 -- scripts/common.sh@310 -- # local nvmes 00:30:03.189 13:14:07 -- scripts/common.sh@312 -- # [[ -n '' ]] 00:30:03.189 13:14:07 -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:30:03.189 13:14:07 -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:30:03.448 13:14:07 -- scripts/common.sh@295 -- # local bdf= 00:30:03.448 13:14:07 -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:30:03.448 13:14:07 -- 
scripts/common.sh@230 -- # local class 00:30:03.448 13:14:07 -- scripts/common.sh@231 -- # local subclass 00:30:03.448 13:14:07 -- scripts/common.sh@232 -- # local progif 00:30:03.448 13:14:07 -- scripts/common.sh@233 -- # printf %02x 1 00:30:03.448 13:14:07 -- scripts/common.sh@233 -- # class=01 00:30:03.448 13:14:07 -- scripts/common.sh@234 -- # printf %02x 8 00:30:03.448 13:14:07 -- scripts/common.sh@234 -- # subclass=08 00:30:03.448 13:14:07 -- scripts/common.sh@235 -- # printf %02x 2 00:30:03.448 13:14:07 -- scripts/common.sh@235 -- # progif=02 00:30:03.448 13:14:07 -- scripts/common.sh@237 -- # hash lspci 00:30:03.448 13:14:07 -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:30:03.448 13:14:07 -- scripts/common.sh@239 -- # lspci -mm -n -D 00:30:03.448 13:14:07 -- scripts/common.sh@240 -- # grep -i -- -p02 00:30:03.448 13:14:07 -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:30:03.448 13:14:07 -- scripts/common.sh@242 -- # tr -d '"' 00:30:03.448 13:14:07 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:30:03.448 13:14:07 -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:30:03.448 13:14:07 -- scripts/common.sh@15 -- # local i 00:30:03.448 13:14:07 -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:30:03.448 13:14:07 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:30:03.448 13:14:07 -- scripts/common.sh@24 -- # return 0 00:30:03.448 13:14:07 -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:30:03.448 13:14:07 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:30:03.448 13:14:07 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:30:03.448 13:14:07 -- scripts/common.sh@320 -- # uname -s 00:30:03.448 13:14:07 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:30:03.448 13:14:07 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:30:03.448 13:14:07 -- scripts/common.sh@325 -- # (( 1 )) 00:30:03.448 13:14:07 -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 00:30:03.448 13:14:07 -- dd/dd.sh@13 -- # check_liburing 00:30:03.448 13:14:07 -- dd/common.sh@139 -- # local lib so 00:30:03.448 13:14:07 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:30:03.448 13:14:07 -- dd/common.sh@142 -- # read -r lib _ so _ 00:30:03.448 13:14:07 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:30:03.448 13:14:07 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:03.448 13:14:07 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:30:03.448 13:14:07 -- dd/common.sh@142 -- # read -r lib _ so _ 00:30:03.448 13:14:07 -- dd/common.sh@143 -- # [[ libasan.so.5 == liburing.so.* ]] 00:30:03.448 13:14:07 -- dd/common.sh@142 -- # read -r lib _ so _ 00:30:03.448 13:14:07 -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]] 00:30:03.448 13:14:07 -- dd/common.sh@142 -- # read -r lib _ so _ 00:30:03.448 13:14:07 -- dd/common.sh@143 -- # [[ libdl.so.2 == liburing.so.* ]] 00:30:03.448 13:14:07 -- dd/common.sh@142 -- # read -r lib _ so _ 00:30:03.448 13:14:07 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:30:03.448 13:14:07 -- dd/common.sh@142 -- # read -r lib _ so _ 00:30:03.448 13:14:07 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:30:03.448 13:14:07 -- dd/common.sh@142 -- # read -r lib _ so _ 00:30:03.448 13:14:07 -- dd/common.sh@143 -- # [[ librt.so.1 == liburing.so.* ]] 00:30:03.448 13:14:07 -- dd/common.sh@142 -- # read -r lib _ so _ 00:30:03.448 13:14:07 -- dd/common.sh@143 -- # [[ libuuid.so.1 == 
liburing.so.* ]] 00:30:03.448 13:14:07 -- dd/common.sh@142 -- # read -r lib _ so _ 00:30:03.448 13:14:07 -- dd/common.sh@143 -- # [[ libssl.so.1.1 == liburing.so.* ]] 00:30:03.448 13:14:07 -- dd/common.sh@142 -- # read -r lib _ so _ 00:30:03.448 13:14:07 -- dd/common.sh@143 -- # [[ libcrypto.so.1.1 == liburing.so.* ]] 00:30:03.448 13:14:07 -- dd/common.sh@142 -- # read -r lib _ so _ 00:30:03.448 13:14:07 -- dd/common.sh@143 -- # [[ libm.so.6 == liburing.so.* ]] 00:30:03.448 13:14:07 -- dd/common.sh@142 -- # read -r lib _ so _ 00:30:03.448 13:14:07 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:30:03.448 13:14:07 -- dd/common.sh@142 -- # read -r lib _ so _ 00:30:03.448 13:14:07 -- dd/common.sh@143 -- # [[ libkeyutils.so.1 == liburing.so.* ]] 00:30:03.448 13:14:07 -- dd/common.sh@142 -- # read -r lib _ so _ 00:30:03.448 13:14:07 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:30:03.448 13:14:07 -- dd/common.sh@142 -- # read -r lib _ so _ 00:30:03.448 13:14:07 -- dd/common.sh@143 -- # [[ libiscsi.so.7 == liburing.so.* ]] 00:30:03.448 13:14:07 -- dd/common.sh@142 -- # read -r lib _ so _ 00:30:03.448 13:14:07 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:30:03.448 13:14:07 -- dd/common.sh@142 -- # read -r lib _ so _ 00:30:03.448 13:14:07 -- dd/common.sh@143 -- # [[ libpthread.so.0 == liburing.so.* ]] 00:30:03.448 13:14:07 -- dd/common.sh@142 -- # read -r lib _ so _ 00:30:03.448 13:14:07 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:30:03.448 13:14:07 -- dd/common.sh@142 -- # read -r lib _ so _ 00:30:03.448 13:14:07 -- dd/common.sh@143 -- # [[ libgcc_s.so.1 == liburing.so.* ]] 00:30:03.448 13:14:07 -- dd/common.sh@142 -- # read -r lib _ so _ 00:30:03.448 13:14:07 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:30:03.448 13:14:07 -- dd/common.sh@142 -- # read -r lib _ so _ 00:30:03.448 13:14:07 -- dd/common.sh@143 -- # [[ libnl-route-3.so.200 == liburing.so.* ]] 00:30:03.448 13:14:07 -- dd/common.sh@142 -- # read -r lib _ so _ 00:30:03.448 13:14:07 -- dd/common.sh@143 -- # [[ libnl-3.so.200 == liburing.so.* ]] 00:30:03.448 13:14:07 -- dd/common.sh@142 -- # read -r lib _ so _ 00:30:03.448 13:14:07 -- dd/common.sh@143 -- # [[ libstdc++.so.6 == liburing.so.* ]] 00:30:03.448 13:14:07 -- dd/common.sh@142 -- # read -r lib _ so _ 00:30:03.448 13:14:07 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:30:03.448 13:14:07 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 00:30:03.448 13:14:07 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:30:03.448 13:14:07 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:30:03.448 13:14:07 -- common/autotest_common.sh@10 -- # set +x 00:30:03.448 ************************************ 00:30:03.448 START TEST spdk_dd_basic_rw 00:30:03.448 ************************************ 00:30:03.448 13:14:07 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 00:30:03.448 * Looking for test storage... 
00:30:03.448 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:30:03.448 13:14:07 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:03.448 13:14:07 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:03.448 13:14:07 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:03.448 13:14:07 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:03.448 13:14:07 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:03.448 13:14:07 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:03.448 13:14:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:03.448 13:14:07 -- paths/export.sh@5 -- # export PATH 00:30:03.449 13:14:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:03.449 13:14:07 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:30:03.449 13:14:07 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:30:03.449 13:14:07 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:30:03.449 13:14:07 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:30:03.449 13:14:07 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:30:03.449 13:14:07 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(["name"]=$nvme0 ["traddr"]=$nvme0_pci ["trtype"]=pcie) 00:30:03.449 13:14:07 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:30:03.449 13:14:07 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:30:03.449 13:14:07 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:03.449 13:14:07 -- dd/basic_rw.sh@93 
-- # get_native_nvme_bs 0000:00:10.0 00:30:03.449 13:14:07 -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:30:03.449 13:14:07 -- dd/common.sh@126 -- # mapfile -t id 00:30:03.449 13:14:07 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:30:03.711 13:14:07 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects 
Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 119 Data Units Written: 7 Host Read Commands: 2525 Host Write Commands: 114 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 
Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:30:03.711 13:14:07 -- dd/common.sh@130 -- # lbaf=04 00:30:03.712 13:14:07 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not 
[... duplicate NVMe controller identify output elided; this span repeated, verbatim, the identify dump shown above ...]
Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 119 Data Units Written: 7 Host Read Commands: 2525 Host Write Commands: 114 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:30:03.712 13:14:07 -- dd/common.sh@132 -- # lbaf=4096 00:30:03.712 13:14:07 -- dd/common.sh@134 -- # echo 4096 00:30:03.712 13:14:07 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:30:03.712 13:14:07 -- dd/basic_rw.sh@96 -- # : 00:30:03.712 13:14:07 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:30:03.712 13:14:07 -- dd/basic_rw.sh@96 -- # gen_conf 00:30:03.712 13:14:07 -- dd/common.sh@31 -- # xtrace_disable 00:30:03.712 13:14:07 -- common/autotest_common.sh@1075 -- # '[' 8 -le 1 ']' 00:30:03.712 13:14:07 -- common/autotest_common.sh@10 -- # set +x 00:30:03.712 13:14:07 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:30:03.712 13:14:07 -- common/autotest_common.sh@10 -- # set +x 00:30:03.712 ************************************ 00:30:03.712 START TEST dd_bs_lt_native_bs 
00:30:03.712 ************************************ 00:30:03.712 13:14:07 -- common/autotest_common.sh@1099 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:30:03.712 13:14:07 -- common/autotest_common.sh@638 -- # local es=0 00:30:03.712 13:14:07 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:30:03.712 13:14:07 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:03.712 13:14:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:03.712 13:14:07 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:03.712 13:14:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:03.712 13:14:07 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:03.712 13:14:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:03.712 13:14:07 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:03.712 13:14:07 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:30:03.712 13:14:07 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:30:03.712 { 00:30:03.712 "subsystems": [ 00:30:03.712 { 00:30:03.712 "subsystem": "bdev", 00:30:03.712 "config": [ 00:30:03.712 { 00:30:03.712 "params": { 00:30:03.712 "trtype": "pcie", 00:30:03.712 "traddr": "0000:00:10.0", 00:30:03.712 "name": "Nvme0" 00:30:03.712 }, 00:30:03.712 "method": "bdev_nvme_attach_controller" 00:30:03.712 }, 00:30:03.712 { 00:30:03.712 "method": "bdev_wait_for_examine" 00:30:03.712 } 00:30:03.712 ] 00:30:03.712 } 00:30:03.712 ] 00:30:03.712 } 00:30:03.974 [2024-04-17 13:14:07.895940] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:30:03.974 [2024-04-17 13:14:07.896121] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142760 ] 00:30:03.974 [2024-04-17 13:14:08.060402] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.237 [2024-04-17 13:14:08.293631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:04.818 [2024-04-17 13:14:08.667698] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:30:04.818 [2024-04-17 13:14:08.667966] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:05.409 [2024-04-17 13:14:09.396801] spdk_dd.c:1535:main: *ERROR*: Error occurred while performing copy 00:30:05.673 13:14:09 -- common/autotest_common.sh@641 -- # es=234 00:30:05.673 13:14:09 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:30:05.673 13:14:09 -- common/autotest_common.sh@650 -- # es=106 00:30:05.673 13:14:09 -- common/autotest_common.sh@651 -- # case "$es" in 00:30:05.673 13:14:09 -- common/autotest_common.sh@658 -- # es=1 00:30:05.673 13:14:09 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:30:05.673 ************************************ 00:30:05.673 00:30:05.673 real 0m1.973s 00:30:05.674 user 0m1.705s 00:30:05.674 sys 0m0.232s 00:30:05.674 13:14:09 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:30:05.674 13:14:09 -- common/autotest_common.sh@10 -- # set +x 00:30:05.674 END TEST dd_bs_lt_native_bs 00:30:05.674 ************************************ 00:30:05.932 13:14:09 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:30:05.932 13:14:09 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:30:05.932 13:14:09 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:30:05.932 13:14:09 -- common/autotest_common.sh@10 -- # set +x 00:30:05.932 ************************************ 00:30:05.932 START TEST dd_rw 00:30:05.932 ************************************ 00:30:05.932 13:14:09 -- common/autotest_common.sh@1099 -- # basic_rw 4096 00:30:05.932 13:14:09 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:30:05.932 13:14:09 -- dd/basic_rw.sh@12 -- # local count size 00:30:05.932 13:14:09 -- dd/basic_rw.sh@13 -- # local qds bss 00:30:05.932 13:14:09 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:30:05.932 13:14:09 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:30:05.932 13:14:09 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:30:05.932 13:14:09 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:30:05.932 13:14:09 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:30:05.932 13:14:09 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:30:05.932 13:14:09 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:30:05.932 13:14:09 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:30:05.932 13:14:09 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:30:05.933 13:14:09 -- dd/basic_rw.sh@23 -- # count=15 00:30:05.933 13:14:09 -- dd/basic_rw.sh@24 -- # count=15 00:30:05.933 13:14:09 -- dd/basic_rw.sh@25 -- # size=61440 00:30:05.933 13:14:09 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:30:05.933 13:14:09 -- dd/common.sh@98 -- # xtrace_disable 00:30:05.933 13:14:09 -- common/autotest_common.sh@10 -- # set +x 00:30:06.500 13:14:10 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
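
The dd_rw prologue above builds its block-size ladder by left-shifting the native block size, and the 61440-byte size is plain count-times-bs arithmetic; a short worked sketch:

  native_bs=4096                         # from the identify dump (LBA Format #04)
  bss=()
  for bs in {0..2}; do
      bss+=( $(( native_bs << bs )) )    # 4096, 8192, 16384
  done
  count=15
  echo $(( count * bss[0] ))             # 61440, the gen_bytes size in the trace
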
00:30:06.500 13:14:10 -- dd/basic_rw.sh@30 -- # gen_conf 00:30:06.500 13:14:10 -- dd/common.sh@31 -- # xtrace_disable 00:30:06.500 13:14:10 -- common/autotest_common.sh@10 -- # set +x 00:30:06.500 { 00:30:06.500 "subsystems": [ 00:30:06.500 { 00:30:06.500 "subsystem": "bdev", 00:30:06.500 "config": [ 00:30:06.500 { 00:30:06.500 "params": { 00:30:06.500 "trtype": "pcie", 00:30:06.500 "traddr": "0000:00:10.0", 00:30:06.500 "name": "Nvme0" 00:30:06.500 }, 00:30:06.500 "method": "bdev_nvme_attach_controller" 00:30:06.500 }, 00:30:06.500 { 00:30:06.500 "method": "bdev_wait_for_examine" 00:30:06.500 } 00:30:06.500 ] 00:30:06.500 } 00:30:06.501 ] 00:30:06.501 } 00:30:06.501 [2024-04-17 13:14:10.564174] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:30:06.501 [2024-04-17 13:14:10.564395] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142826 ] 00:30:06.760 [2024-04-17 13:14:10.740070] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:07.019 [2024-04-17 13:14:10.957495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:08.652  Copying: 60/60 [kB] (average 19 MBps) 00:30:08.652 00:30:08.652 13:14:12 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:30:08.652 13:14:12 -- dd/basic_rw.sh@37 -- # gen_conf 00:30:08.652 13:14:12 -- dd/common.sh@31 -- # xtrace_disable 00:30:08.652 13:14:12 -- common/autotest_common.sh@10 -- # set +x 00:30:08.652 [2024-04-17 13:14:12.509257] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
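
Each spdk_dd invocation above takes its bdev configuration as JSON on an inherited descriptor (--json /dev/fd/62) rather than a file on disk. A minimal sketch of that pattern with process substitution; this gen_conf is a stand-in that prints the JSON from the trace, not the real helper behind dd/common.sh:

  gen_conf() {
      printf '%s' '{ "subsystems": [ { "subsystem": "bdev", "config": [
        { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller" },
        { "method": "bdev_wait_for_examine" } ] } ] }'
  }

  # <(gen_conf) expands to a /dev/fd/NN path, matching --json /dev/fd/62 above
  "$SPDK_BIN_DIR/spdk_dd" --if="$SPDK_TEST_STORAGE/dd.dump0" --ob=Nvme0n1 \
      --bs=4096 --qd=1 --json <(gen_conf)
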
00:30:08.652 [2024-04-17 13:14:12.510030] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142864 ] 00:30:08.652 { 00:30:08.652 "subsystems": [ 00:30:08.652 { 00:30:08.652 "subsystem": "bdev", 00:30:08.652 "config": [ 00:30:08.652 { 00:30:08.652 "params": { 00:30:08.652 "trtype": "pcie", 00:30:08.652 "traddr": "0000:00:10.0", 00:30:08.652 "name": "Nvme0" 00:30:08.652 }, 00:30:08.652 "method": "bdev_nvme_attach_controller" 00:30:08.652 }, 00:30:08.652 { 00:30:08.652 "method": "bdev_wait_for_examine" 00:30:08.652 } 00:30:08.652 ] 00:30:08.652 } 00:30:08.652 ] 00:30:08.652 } 00:30:08.652 [2024-04-17 13:14:12.672980] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:08.910 [2024-04-17 13:14:12.902191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:10.589  Copying: 60/60 [kB] (average 29 MBps) 00:30:10.589 00:30:10.589 13:14:14 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:10.589 13:14:14 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:30:10.589 13:14:14 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:30:10.590 13:14:14 -- dd/common.sh@11 -- # local nvme_ref= 00:30:10.590 13:14:14 -- dd/common.sh@12 -- # local size=61440 00:30:10.590 13:14:14 -- dd/common.sh@14 -- # local bs=1048576 00:30:10.590 13:14:14 -- dd/common.sh@15 -- # local count=1 00:30:10.590 13:14:14 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:30:10.590 13:14:14 -- dd/common.sh@18 -- # gen_conf 00:30:10.590 13:14:14 -- dd/common.sh@31 -- # xtrace_disable 00:30:10.590 13:14:14 -- common/autotest_common.sh@10 -- # set +x 00:30:10.590 [2024-04-17 13:14:14.556177] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
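
The qd=1 pass above is the canonical round trip: write dd.dump0 through the bdev, read the same 15 blocks back into dd.dump1, then require byte equality. Condensed, with paths shortened and gen_conf as sketched earlier:

  "$SPDK_BIN_DIR/spdk_dd" --if=dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json <(gen_conf)
  "$SPDK_BIN_DIR/spdk_dd" --ib=Nvme0n1 --of=dd.dump1 --bs=4096 --qd=1 --count=15 --json <(gen_conf)
  diff -q dd.dump0 dd.dump1    # silent on success; any mismatch fails the test
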
00:30:10.590 [2024-04-17 13:14:14.556472] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142902 ] 00:30:10.590 { 00:30:10.590 "subsystems": [ 00:30:10.590 { 00:30:10.590 "subsystem": "bdev", 00:30:10.590 "config": [ 00:30:10.590 { 00:30:10.590 "params": { 00:30:10.590 "trtype": "pcie", 00:30:10.590 "traddr": "0000:00:10.0", 00:30:10.590 "name": "Nvme0" 00:30:10.590 }, 00:30:10.590 "method": "bdev_nvme_attach_controller" 00:30:10.590 }, 00:30:10.590 { 00:30:10.590 "method": "bdev_wait_for_examine" 00:30:10.590 } 00:30:10.590 ] 00:30:10.590 } 00:30:10.590 ] 00:30:10.590 } 00:30:10.590 [2024-04-17 13:14:14.718628] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:10.848 [2024-04-17 13:14:14.926103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:12.352  Copying: 1024/1024 [kB] (average 500 MBps) 00:30:12.352 00:30:12.352 13:14:16 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:30:12.352 13:14:16 -- dd/basic_rw.sh@23 -- # count=15 00:30:12.352 13:14:16 -- dd/basic_rw.sh@24 -- # count=15 00:30:12.352 13:14:16 -- dd/basic_rw.sh@25 -- # size=61440 00:30:12.352 13:14:16 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:30:12.352 13:14:16 -- dd/common.sh@98 -- # xtrace_disable 00:30:12.352 13:14:16 -- common/autotest_common.sh@10 -- # set +x 00:30:12.920 13:14:16 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:30:12.920 13:14:16 -- dd/basic_rw.sh@30 -- # gen_conf 00:30:12.920 13:14:16 -- dd/common.sh@31 -- # xtrace_disable 00:30:12.920 13:14:16 -- common/autotest_common.sh@10 -- # set +x 00:30:12.920 [2024-04-17 13:14:17.025445] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:30:12.920 [2024-04-17 13:14:17.025639] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142933 ] 00:30:12.920 { 00:30:12.920 "subsystems": [ 00:30:12.920 { 00:30:12.920 "subsystem": "bdev", 00:30:12.920 "config": [ 00:30:12.920 { 00:30:12.920 "params": { 00:30:12.920 "trtype": "pcie", 00:30:12.920 "traddr": "0000:00:10.0", 00:30:12.920 "name": "Nvme0" 00:30:12.920 }, 00:30:12.920 "method": "bdev_nvme_attach_controller" 00:30:12.920 }, 00:30:12.920 { 00:30:12.920 "method": "bdev_wait_for_examine" 00:30:12.920 } 00:30:12.920 ] 00:30:12.920 } 00:30:12.920 ] 00:30:12.920 } 00:30:13.178 [2024-04-17 13:14:17.183959] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:13.437 [2024-04-17 13:14:17.399667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:15.074  Copying: 60/60 [kB] (average 58 MBps) 00:30:15.074 00:30:15.075 13:14:18 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:30:15.075 13:14:18 -- dd/basic_rw.sh@37 -- # gen_conf 00:30:15.075 13:14:18 -- dd/common.sh@31 -- # xtrace_disable 00:30:15.075 13:14:18 -- common/autotest_common.sh@10 -- # set +x 00:30:15.075 [2024-04-17 13:14:18.951837] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
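The `for bs in "${bss[@]}"` / `for qd in "${qds[@]}"` lines in the trace show how the test matrix is driven. The traced counts (15, 7, 3) and sizes (61440, 57344, 49152) are consistent with count being the integer quotient 61440/bs, though the script's actual derivation is not visible in this log. A hedged reconstruction of the driver loop, with array contents inferred from the passes traced here:

bss=(4096 8192 16384)   # block sizes exercised in this log
qds=(1 64)              # queue depths exercised in this log
for bs in "${bss[@]}"; do
  for qd in "${qds[@]}"; do
    count=$(( 61440 / bs ))   # reproduces the traced 15 / 7 / 3
    size=$(( count * bs ))    # 61440 / 57344 / 49152 bytes per pass
    # write, read back with --bs=$bs --qd=$qd --count=$count, diff, clear_nvme
  done
done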
00:30:15.075 [2024-04-17 13:14:18.952341] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142961 ] 00:30:15.075 { 00:30:15.075 "subsystems": [ 00:30:15.075 { 00:30:15.075 "subsystem": "bdev", 00:30:15.075 "config": [ 00:30:15.075 { 00:30:15.075 "params": { 00:30:15.075 "trtype": "pcie", 00:30:15.075 "traddr": "0000:00:10.0", 00:30:15.075 "name": "Nvme0" 00:30:15.075 }, 00:30:15.075 "method": "bdev_nvme_attach_controller" 00:30:15.075 }, 00:30:15.075 { 00:30:15.075 "method": "bdev_wait_for_examine" 00:30:15.075 } 00:30:15.075 ] 00:30:15.075 } 00:30:15.075 ] 00:30:15.075 } 00:30:15.075 [2024-04-17 13:14:19.115371] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:15.334 [2024-04-17 13:14:19.383547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:16.861  Copying: 60/60 [kB] (average 58 MBps) 00:30:16.861 00:30:16.861 13:14:20 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:16.861 13:14:20 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:30:16.861 13:14:20 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:30:16.861 13:14:20 -- dd/common.sh@11 -- # local nvme_ref= 00:30:16.861 13:14:20 -- dd/common.sh@12 -- # local size=61440 00:30:16.861 13:14:20 -- dd/common.sh@14 -- # local bs=1048576 00:30:16.861 13:14:20 -- dd/common.sh@15 -- # local count=1 00:30:16.861 13:14:20 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:30:16.861 13:14:20 -- dd/common.sh@18 -- # gen_conf 00:30:16.861 13:14:20 -- dd/common.sh@31 -- # xtrace_disable 00:30:16.861 13:14:20 -- common/autotest_common.sh@10 -- # set +x 00:30:16.861 [2024-04-17 13:14:20.928696] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
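gen_bytes N (from dd/common.sh) fills the dump0 input file with N bytes of test data; judging from the dump contents echoed later in this log, the data is lowercase alphanumeric text rather than raw binary. One illustrative stand-in that produces similar-looking input, not the suite's actual implementation:

# Hypothetical gen_bytes replacement; the real helper lives in test/dd/common.sh.
gen_bytes() {
  local n=$1
  tr -dc 'a-z0-9' < /dev/urandom | head -c "$n" > dd.dump0
}
gen_bytes 57344   # e.g. the 7 x 8192-byte pass traced here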
00:30:16.861 [2024-04-17 13:14:20.929153] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142993 ] 00:30:16.861 { 00:30:16.861 "subsystems": [ 00:30:16.861 { 00:30:16.861 "subsystem": "bdev", 00:30:16.861 "config": [ 00:30:16.861 { 00:30:16.861 "params": { 00:30:16.861 "trtype": "pcie", 00:30:16.861 "traddr": "0000:00:10.0", 00:30:16.861 "name": "Nvme0" 00:30:16.861 }, 00:30:16.861 "method": "bdev_nvme_attach_controller" 00:30:16.861 }, 00:30:16.861 { 00:30:16.861 "method": "bdev_wait_for_examine" 00:30:16.861 } 00:30:16.861 ] 00:30:16.861 } 00:30:16.861 ] 00:30:16.861 } 00:30:17.154 [2024-04-17 13:14:21.089311] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:17.154 [2024-04-17 13:14:21.291696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:18.657  Copying: 1024/1024 [kB] (average 1000 MBps) 00:30:18.657 00:30:18.916 13:14:22 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:30:18.916 13:14:22 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:30:18.916 13:14:22 -- dd/basic_rw.sh@23 -- # count=7 00:30:18.916 13:14:22 -- dd/basic_rw.sh@24 -- # count=7 00:30:18.916 13:14:22 -- dd/basic_rw.sh@25 -- # size=57344 00:30:18.916 13:14:22 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:30:18.916 13:14:22 -- dd/common.sh@98 -- # xtrace_disable 00:30:18.916 13:14:22 -- common/autotest_common.sh@10 -- # set +x 00:30:19.482 13:14:23 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:30:19.482 13:14:23 -- dd/basic_rw.sh@30 -- # gen_conf 00:30:19.482 13:14:23 -- dd/common.sh@31 -- # xtrace_disable 00:30:19.482 13:14:23 -- common/autotest_common.sh@10 -- # set +x 00:30:19.482 [2024-04-17 13:14:23.498492] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
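Between passes, clear_nvme (traced above with its locals: bdev=Nvme0n1, size, bs=1048576, count=1) scrubs the region just written by copying a single 1 MiB block of zeroes over the start of the bdev, so stale data from a previous pass can never satisfy the diff. A hedged skeleton; the real helper in dd/common.sh may do more with its size argument, and DD/CONF are the placeholders defined in the earlier sketch:

clear_nvme() {
  local bdev=$1 nvme_ref=$2 size=$3
  local bs=1048576 count=1   # one 1 MiB block covers every size used here
  "$DD" --if=/dev/zero --bs=$bs --ob="$bdev" --count=$count --json "$CONF"
}
clear_nvme Nvme0n1 '' 61440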
00:30:19.482 [2024-04-17 13:14:23.498836] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143040 ] 00:30:19.482 { 00:30:19.482 "subsystems": [ 00:30:19.482 { 00:30:19.482 "subsystem": "bdev", 00:30:19.482 "config": [ 00:30:19.482 { 00:30:19.482 "params": { 00:30:19.482 "trtype": "pcie", 00:30:19.482 "traddr": "0000:00:10.0", 00:30:19.482 "name": "Nvme0" 00:30:19.482 }, 00:30:19.482 "method": "bdev_nvme_attach_controller" 00:30:19.482 }, 00:30:19.482 { 00:30:19.482 "method": "bdev_wait_for_examine" 00:30:19.482 } 00:30:19.482 ] 00:30:19.482 } 00:30:19.482 ] 00:30:19.482 } 00:30:19.740 [2024-04-17 13:14:23.657266] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:19.740 [2024-04-17 13:14:23.868810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:21.278  Copying: 56/56 [kB] (average 54 MBps) 00:30:21.278 00:30:21.278 13:14:25 -- dd/basic_rw.sh@37 -- # gen_conf 00:30:21.278 13:14:25 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:30:21.278 13:14:25 -- dd/common.sh@31 -- # xtrace_disable 00:30:21.278 13:14:25 -- common/autotest_common.sh@10 -- # set +x 00:30:21.278 [2024-04-17 13:14:25.393069] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:30:21.278 [2024-04-17 13:14:25.393449] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143072 ] 00:30:21.278 { 00:30:21.278 "subsystems": [ 00:30:21.278 { 00:30:21.278 "subsystem": "bdev", 00:30:21.278 "config": [ 00:30:21.278 { 00:30:21.278 "params": { 00:30:21.278 "trtype": "pcie", 00:30:21.278 "traddr": "0000:00:10.0", 00:30:21.278 "name": "Nvme0" 00:30:21.278 }, 00:30:21.278 "method": "bdev_nvme_attach_controller" 00:30:21.278 }, 00:30:21.278 { 00:30:21.278 "method": "bdev_wait_for_examine" 00:30:21.278 } 00:30:21.278 ] 00:30:21.278 } 00:30:21.278 ] 00:30:21.278 } 00:30:21.537 [2024-04-17 13:14:25.556212] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:21.796 [2024-04-17 13:14:25.792809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:23.433  Copying: 56/56 [kB] (average 54 MBps) 00:30:23.433 00:30:23.433 13:14:27 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:23.433 13:14:27 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:30:23.433 13:14:27 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:30:23.433 13:14:27 -- dd/common.sh@11 -- # local nvme_ref= 00:30:23.433 13:14:27 -- dd/common.sh@12 -- # local size=57344 00:30:23.433 13:14:27 -- dd/common.sh@14 -- # local bs=1048576 00:30:23.433 13:14:27 -- dd/common.sh@15 -- # local count=1 00:30:23.433 13:14:27 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:30:23.433 13:14:27 -- dd/common.sh@18 -- # gen_conf 00:30:23.433 13:14:27 -- dd/common.sh@31 -- # xtrace_disable 00:30:23.433 13:14:27 -- common/autotest_common.sh@10 -- # set +x 00:30:23.433 { 00:30:23.433 "subsystems": [ 00:30:23.433 { 00:30:23.433 
"subsystem": "bdev", 00:30:23.433 "config": [ 00:30:23.433 { 00:30:23.433 "params": { 00:30:23.433 "trtype": "pcie", 00:30:23.433 "traddr": "0000:00:10.0", 00:30:23.433 "name": "Nvme0" 00:30:23.433 }, 00:30:23.433 "method": "bdev_nvme_attach_controller" 00:30:23.433 }, 00:30:23.433 { 00:30:23.433 "method": "bdev_wait_for_examine" 00:30:23.433 } 00:30:23.433 ] 00:30:23.433 } 00:30:23.433 ] 00:30:23.433 } 00:30:23.433 [2024-04-17 13:14:27.452479] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:30:23.433 [2024-04-17 13:14:27.452907] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143100 ] 00:30:23.692 [2024-04-17 13:14:27.628032] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:23.951 [2024-04-17 13:14:27.840700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:25.176  Copying: 1024/1024 [kB] (average 1000 MBps) 00:30:25.176 00:30:25.176 13:14:29 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:30:25.176 13:14:29 -- dd/basic_rw.sh@23 -- # count=7 00:30:25.176 13:14:29 -- dd/basic_rw.sh@24 -- # count=7 00:30:25.176 13:14:29 -- dd/basic_rw.sh@25 -- # size=57344 00:30:25.176 13:14:29 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:30:25.176 13:14:29 -- dd/common.sh@98 -- # xtrace_disable 00:30:25.176 13:14:29 -- common/autotest_common.sh@10 -- # set +x 00:30:26.111 13:14:29 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:30:26.111 13:14:29 -- dd/basic_rw.sh@30 -- # gen_conf 00:30:26.111 13:14:29 -- dd/common.sh@31 -- # xtrace_disable 00:30:26.111 13:14:29 -- common/autotest_common.sh@10 -- # set +x 00:30:26.111 [2024-04-17 13:14:29.961746] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:30:26.111 [2024-04-17 13:14:29.962240] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143139 ] 00:30:26.111 { 00:30:26.111 "subsystems": [ 00:30:26.111 { 00:30:26.111 "subsystem": "bdev", 00:30:26.111 "config": [ 00:30:26.111 { 00:30:26.111 "params": { 00:30:26.111 "trtype": "pcie", 00:30:26.111 "traddr": "0000:00:10.0", 00:30:26.111 "name": "Nvme0" 00:30:26.111 }, 00:30:26.111 "method": "bdev_nvme_attach_controller" 00:30:26.111 }, 00:30:26.111 { 00:30:26.111 "method": "bdev_wait_for_examine" 00:30:26.111 } 00:30:26.111 ] 00:30:26.111 } 00:30:26.111 ] 00:30:26.111 } 00:30:26.111 [2024-04-17 13:14:30.132869] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:26.369 [2024-04-17 13:14:30.342408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:28.003  Copying: 56/56 [kB] (average 54 MBps) 00:30:28.003 00:30:28.003 13:14:31 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:30:28.003 13:14:31 -- dd/basic_rw.sh@37 -- # gen_conf 00:30:28.003 13:14:31 -- dd/common.sh@31 -- # xtrace_disable 00:30:28.003 13:14:31 -- common/autotest_common.sh@10 -- # set +x 00:30:28.003 { 00:30:28.003 "subsystems": [ 00:30:28.003 { 00:30:28.003 "subsystem": "bdev", 00:30:28.003 "config": [ 00:30:28.003 { 00:30:28.003 "params": { 00:30:28.003 "trtype": "pcie", 00:30:28.003 "traddr": "0000:00:10.0", 00:30:28.003 "name": "Nvme0" 00:30:28.003 }, 00:30:28.003 "method": "bdev_nvme_attach_controller" 00:30:28.003 }, 00:30:28.003 { 00:30:28.003 "method": "bdev_wait_for_examine" 00:30:28.003 } 00:30:28.003 ] 00:30:28.003 } 00:30:28.003 ] 00:30:28.003 } 00:30:28.003 [2024-04-17 13:14:31.976356] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:30:28.003 [2024-04-17 13:14:31.976872] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143170 ] 00:30:28.260 [2024-04-17 13:14:32.162974] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.260 [2024-04-17 13:14:32.389307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:30.204  Copying: 56/56 [kB] (average 54 MBps) 00:30:30.204 00:30:30.204 13:14:34 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:30.204 13:14:34 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:30:30.204 13:14:34 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:30:30.204 13:14:34 -- dd/common.sh@11 -- # local nvme_ref= 00:30:30.204 13:14:34 -- dd/common.sh@12 -- # local size=57344 00:30:30.204 13:14:34 -- dd/common.sh@14 -- # local bs=1048576 00:30:30.204 13:14:34 -- dd/common.sh@15 -- # local count=1 00:30:30.204 13:14:34 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:30:30.204 13:14:34 -- dd/common.sh@18 -- # gen_conf 00:30:30.204 13:14:34 -- dd/common.sh@31 -- # xtrace_disable 00:30:30.204 13:14:34 -- common/autotest_common.sh@10 -- # set +x 00:30:30.204 { 00:30:30.204 "subsystems": [ 00:30:30.204 { 00:30:30.204 "subsystem": "bdev", 00:30:30.204 "config": [ 00:30:30.204 { 00:30:30.204 "params": { 00:30:30.204 "trtype": "pcie", 00:30:30.204 "traddr": "0000:00:10.0", 00:30:30.204 "name": "Nvme0" 00:30:30.204 }, 00:30:30.204 "method": "bdev_nvme_attach_controller" 00:30:30.204 }, 00:30:30.204 { 00:30:30.204 "method": "bdev_wait_for_examine" 00:30:30.204 } 00:30:30.204 ] 00:30:30.204 } 00:30:30.204 ] 00:30:30.204 } 00:30:30.204 [2024-04-17 13:14:34.077516] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
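Every spdk_dd start logs the same DPDK EAL parameter line; only the --file-prefix pid changes. For reference, the recurring flags are standard DPDK/SPDK EAL options and break down roughly as follows (see the DPDK EAL documentation for authoritative meanings):

#  -c 0x1                     core mask: run on core 0 only
#  --no-shconf                no shared config files (multi-process safety)
#  --huge-unlink              unlink hugepage files once they are mapped
#  --no-telemetry             disable the DPDK telemetry socket
#  --log-level=...            per-component log verbosity
#  --iova-mode=pa             use physical addresses for IOVA
#  --base-virtaddr=0x2...     fixed base virtual address for mappings
#  --match-allocations        return hugepages exactly as allocated
#  --file-prefix=spdk_pidNNN  per-process hugepage namespace (hence the
#                             changing pid in each line of this log)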
00:30:30.204 [2024-04-17 13:14:34.077866] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143219 ] 00:30:30.204 [2024-04-17 13:14:34.246441] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:30.463 [2024-04-17 13:14:34.511470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:32.490  Copying: 1024/1024 [kB] (average 1000 MBps) 00:30:32.490 00:30:32.490 13:14:36 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:30:32.490 13:14:36 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:30:32.490 13:14:36 -- dd/basic_rw.sh@23 -- # count=3 00:30:32.490 13:14:36 -- dd/basic_rw.sh@24 -- # count=3 00:30:32.490 13:14:36 -- dd/basic_rw.sh@25 -- # size=49152 00:30:32.490 13:14:36 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:30:32.490 13:14:36 -- dd/common.sh@98 -- # xtrace_disable 00:30:32.490 13:14:36 -- common/autotest_common.sh@10 -- # set +x 00:30:32.747 13:14:36 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:30:32.747 13:14:36 -- dd/basic_rw.sh@30 -- # gen_conf 00:30:32.747 13:14:36 -- dd/common.sh@31 -- # xtrace_disable 00:30:32.747 13:14:36 -- common/autotest_common.sh@10 -- # set +x 00:30:33.005 [2024-04-17 13:14:36.915716] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:30:33.005 [2024-04-17 13:14:36.916151] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143258 ] 00:30:33.005 { 00:30:33.005 "subsystems": [ 00:30:33.005 { 00:30:33.005 "subsystem": "bdev", 00:30:33.005 "config": [ 00:30:33.005 { 00:30:33.005 "params": { 00:30:33.005 "trtype": "pcie", 00:30:33.005 "traddr": "0000:00:10.0", 00:30:33.005 "name": "Nvme0" 00:30:33.005 }, 00:30:33.006 "method": "bdev_nvme_attach_controller" 00:30:33.006 }, 00:30:33.006 { 00:30:33.006 "method": "bdev_wait_for_examine" 00:30:33.006 } 00:30:33.006 ] 00:30:33.006 } 00:30:33.006 ] 00:30:33.006 } 00:30:33.006 [2024-04-17 13:14:37.089971] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:33.264 [2024-04-17 13:14:37.338461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:34.766  Copying: 48/48 [kB] (average 46 MBps) 00:30:34.766 00:30:34.766 13:14:38 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:30:34.766 13:14:38 -- dd/basic_rw.sh@37 -- # gen_conf 00:30:34.766 13:14:38 -- dd/common.sh@31 -- # xtrace_disable 00:30:34.766 13:14:38 -- common/autotest_common.sh@10 -- # set +x 00:30:35.025 { 00:30:35.025 "subsystems": [ 00:30:35.025 { 00:30:35.025 "subsystem": "bdev", 00:30:35.025 "config": [ 00:30:35.025 { 00:30:35.025 "params": { 00:30:35.025 "trtype": "pcie", 00:30:35.025 "traddr": "0000:00:10.0", 00:30:35.025 "name": "Nvme0" 00:30:35.025 }, 00:30:35.025 "method": "bdev_nvme_attach_controller" 00:30:35.025 }, 00:30:35.025 { 00:30:35.025 "method": "bdev_wait_for_examine" 00:30:35.025 } 00:30:35.025 ] 00:30:35.025 } 00:30:35.025 ] 00:30:35.025 } 00:30:35.025 [2024-04-17 13:14:38.948302] Starting SPDK v24.05-pre git sha1 
2b97e37d6 / DPDK 23.11.0 initialization... 00:30:35.025 [2024-04-17 13:14:38.948659] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143285 ] 00:30:35.025 [2024-04-17 13:14:39.123287] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:35.284 [2024-04-17 13:14:39.386279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:37.229  Copying: 48/48 [kB] (average 46 MBps) 00:30:37.229 00:30:37.229 13:14:41 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:37.229 13:14:41 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:30:37.229 13:14:41 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:30:37.229 13:14:41 -- dd/common.sh@11 -- # local nvme_ref= 00:30:37.229 13:14:41 -- dd/common.sh@12 -- # local size=49152 00:30:37.229 13:14:41 -- dd/common.sh@14 -- # local bs=1048576 00:30:37.229 13:14:41 -- dd/common.sh@15 -- # local count=1 00:30:37.229 13:14:41 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:30:37.229 13:14:41 -- dd/common.sh@18 -- # gen_conf 00:30:37.229 13:14:41 -- dd/common.sh@31 -- # xtrace_disable 00:30:37.229 13:14:41 -- common/autotest_common.sh@10 -- # set +x 00:30:37.229 [2024-04-17 13:14:41.091971] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:30:37.229 [2024-04-17 13:14:41.092541] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143318 ] 00:30:37.229 { 00:30:37.229 "subsystems": [ 00:30:37.229 { 00:30:37.229 "subsystem": "bdev", 00:30:37.229 "config": [ 00:30:37.229 { 00:30:37.229 "params": { 00:30:37.229 "trtype": "pcie", 00:30:37.229 "traddr": "0000:00:10.0", 00:30:37.229 "name": "Nvme0" 00:30:37.229 }, 00:30:37.229 "method": "bdev_nvme_attach_controller" 00:30:37.229 }, 00:30:37.229 { 00:30:37.229 "method": "bdev_wait_for_examine" 00:30:37.229 } 00:30:37.229 ] 00:30:37.229 } 00:30:37.229 ] 00:30:37.229 } 00:30:37.229 [2024-04-17 13:14:41.261569] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:37.488 [2024-04-17 13:14:41.493400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:38.991  Copying: 1024/1024 [kB] (average 500 MBps) 00:30:38.991 00:30:38.991 13:14:43 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:30:38.991 13:14:43 -- dd/basic_rw.sh@23 -- # count=3 00:30:38.991 13:14:43 -- dd/basic_rw.sh@24 -- # count=3 00:30:38.991 13:14:43 -- dd/basic_rw.sh@25 -- # size=49152 00:30:38.991 13:14:43 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:30:38.991 13:14:43 -- dd/common.sh@98 -- # xtrace_disable 00:30:38.991 13:14:43 -- common/autotest_common.sh@10 -- # set +x 00:30:39.929 13:14:43 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:30:39.929 13:14:43 -- dd/basic_rw.sh@30 -- # gen_conf 00:30:39.929 13:14:43 -- dd/common.sh@31 -- # xtrace_disable 00:30:39.929 13:14:43 -- common/autotest_common.sh@10 -- # set +x 00:30:39.929 [2024-04-17 13:14:43.851379] Starting SPDK v24.05-pre git sha1 2b97e37d6 / 
DPDK 23.11.0 initialization... 00:30:39.929 [2024-04-17 13:14:43.851736] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143370 ] 00:30:39.929 { 00:30:39.929 "subsystems": [ 00:30:39.929 { 00:30:39.929 "subsystem": "bdev", 00:30:39.929 "config": [ 00:30:39.929 { 00:30:39.929 "params": { 00:30:39.929 "trtype": "pcie", 00:30:39.929 "traddr": "0000:00:10.0", 00:30:39.929 "name": "Nvme0" 00:30:39.929 }, 00:30:39.929 "method": "bdev_nvme_attach_controller" 00:30:39.929 }, 00:30:39.929 { 00:30:39.929 "method": "bdev_wait_for_examine" 00:30:39.929 } 00:30:39.929 ] 00:30:39.929 } 00:30:39.929 ] 00:30:39.929 } 00:30:39.929 [2024-04-17 13:14:44.018101] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:40.187 [2024-04-17 13:14:44.279333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:42.128  Copying: 48/48 [kB] (average 46 MBps) 00:30:42.128 00:30:42.128 13:14:45 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:30:42.128 13:14:45 -- dd/basic_rw.sh@37 -- # gen_conf 00:30:42.128 13:14:45 -- dd/common.sh@31 -- # xtrace_disable 00:30:42.128 13:14:45 -- common/autotest_common.sh@10 -- # set +x 00:30:42.128 [2024-04-17 13:14:45.993074] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:30:42.128 [2024-04-17 13:14:45.993523] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143404 ] 00:30:42.128 { 00:30:42.128 "subsystems": [ 00:30:42.128 { 00:30:42.128 "subsystem": "bdev", 00:30:42.128 "config": [ 00:30:42.128 { 00:30:42.128 "params": { 00:30:42.128 "trtype": "pcie", 00:30:42.128 "traddr": "0000:00:10.0", 00:30:42.128 "name": "Nvme0" 00:30:42.128 }, 00:30:42.128 "method": "bdev_nvme_attach_controller" 00:30:42.128 }, 00:30:42.128 { 00:30:42.128 "method": "bdev_wait_for_examine" 00:30:42.128 } 00:30:42.128 ] 00:30:42.128 } 00:30:42.128 ] 00:30:42.128 } 00:30:42.128 [2024-04-17 13:14:46.156172] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:42.387 [2024-04-17 13:14:46.374216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:44.022  Copying: 48/48 [kB] (average 46 MBps) 00:30:44.022 00:30:44.022 13:14:47 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:44.022 13:14:47 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:30:44.022 13:14:47 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:30:44.022 13:14:47 -- dd/common.sh@11 -- # local nvme_ref= 00:30:44.022 13:14:47 -- dd/common.sh@12 -- # local size=49152 00:30:44.022 13:14:47 -- dd/common.sh@14 -- # local bs=1048576 00:30:44.022 13:14:47 -- dd/common.sh@15 -- # local count=1 00:30:44.022 13:14:47 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:30:44.022 13:14:47 -- dd/common.sh@18 -- # gen_conf 00:30:44.022 13:14:47 -- dd/common.sh@31 -- # xtrace_disable 00:30:44.022 13:14:47 -- common/autotest_common.sh@10 -- # set +x 00:30:44.022 { 00:30:44.022 "subsystems": [ 
00:30:44.022 { 00:30:44.022 "subsystem": "bdev", 00:30:44.022 "config": [ 00:30:44.022 { 00:30:44.022 "params": { 00:30:44.022 "trtype": "pcie", 00:30:44.022 "traddr": "0000:00:10.0", 00:30:44.022 "name": "Nvme0" 00:30:44.022 }, 00:30:44.022 "method": "bdev_nvme_attach_controller" 00:30:44.022 }, 00:30:44.022 { 00:30:44.022 "method": "bdev_wait_for_examine" 00:30:44.022 } 00:30:44.022 ] 00:30:44.022 } 00:30:44.022 ] 00:30:44.022 } 00:30:44.022 [2024-04-17 13:14:47.959982] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:30:44.022 [2024-04-17 13:14:47.960306] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143436 ] 00:30:44.022 [2024-04-17 13:14:48.126996] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.280 [2024-04-17 13:14:48.349426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:46.229  Copying: 1024/1024 [kB] (average 1000 MBps) 00:30:46.229 00:30:46.229 ************************************ 00:30:46.229 END TEST dd_rw 00:30:46.229 ************************************ 00:30:46.229 00:30:46.229 real 0m40.072s 00:30:46.229 user 0m33.970s 00:30:46.229 sys 0m4.818s 00:30:46.229 13:14:49 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:30:46.229 13:14:49 -- common/autotest_common.sh@10 -- # set +x 00:30:46.229 13:14:49 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:30:46.229 13:14:49 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:30:46.229 13:14:49 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:30:46.229 13:14:49 -- common/autotest_common.sh@10 -- # set +x 00:30:46.229 ************************************ 00:30:46.229 START TEST dd_rw_offset 00:30:46.229 ************************************ 00:30:46.229 13:14:50 -- common/autotest_common.sh@1099 -- # basic_offset 00:30:46.229 13:14:50 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:30:46.229 13:14:50 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:30:46.229 13:14:50 -- dd/common.sh@98 -- # xtrace_disable 00:30:46.229 13:14:50 -- common/autotest_common.sh@10 -- # set +x 00:30:46.229 13:14:50 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:30:46.229 13:14:50 -- dd/basic_rw.sh@56 -- # 
data=imxg8n069hwgvl68bzp35ne84m9qkrhi10cc8aufidyw21wpycw4r97q1s4h6pdufqi5520f02kjcywzklvdo7yzv0e8gmbwpbprd5goq3zwbev5avymu4ns446isvh61j9qsu1bsae6xtw7jampkihfuh7izv5wmkew7m3exote75qatny8zty2ds94b9c8tyc3cmy7ecsk60iubobxa9qx3xy3cjhlfec564abmujh8dzxm82e727vmyslp8cu9tg241kol6j0d0zjm20gp0m37tn0fvujqhlbb6003fyqh8xaock8390pup81l7vfiswnsq5r9rsnewmilfch3o3grfp6dog9hfzpvehsktrrggz27k76cb9wj08d0yiptyykv2t5xqcrzn8k6c9icvozwgubue3w1bomo4jztptl7xzl49jiuh9qnzpk8z859ufm0hszv1prhiq2f37yoe8pobqp75uks884qjq4feojqwa4qy11iqkhcobv17b80sdpi4tingtny1zfzd6ko6akpz1keyvh8v8tnogty7kghns7yv7fhxjnusugz6a8caxutrams9yntggijsf83r86uguupievt9outg0q332zvos6f2lb2c3mypbpu6ldrssl8o106zc0fy2vs85kt404fj505ko1gaamwntxr0i8yltsdzn2v5ndurc4vx0vsymch6vnw49yz4dz18yspofkywvdsde5pzp4zkeaqauzrtwx2lylu4yb4o34lgpaqd91nfdgh81prkeblcq1y37k7o08u52mcobj4f5gt33z6dfgrgbxrfxbu7ya1mqcg4bgmwgkjh7qp6yvnegalruo9a20ted5frvehi22rdr1bzt3c4oo2ee5ci1icwu0ndlsdppml42ahf5dpx88o555kdhzy5hpaqyk5pz5ni3btrarqbpw851qgaujykg0yokztvtttd8eh7kur6qeqgzpovoaaot9suvgv7zt4aua4ldim3k31i36umnl753ou7sqj2uhg2jrjnnuv2j0arf1nfbtrksy54h3n3w8lt5j0mpmpg2r96dw3r4synlv4fpovdt5gw77q0fxd6npvzq6zuf3nfd1npi2jjdw01ymzf860ywvdjim6saqla63qzdasni15u0ppap7azmlr424sfc6hstmjrk62twi3jp97nepw2gc2ectzb656ad1tpn16t0eyf5e1jr4muqm5tet4461mtunannacbwe4djp971ug3s4yqxdaentlbvs4dsthd0valc57sn8xxds6cvg8le3k1llzbmqbal62cjmtucrnc6z7udxnyf52fg1uckreb3gdxch6is8z9trybrgtub5qkvk3e1l8p7vnirk0sztz5clwtu0dtglgga6f3bwj1nlkywe7gwnobohjlphncug4hifrmecbziimiqxuh3gxsaxujzgvo5fifi4jfzsqrr7kzev591ugg1wz46st0n3ii71bmfy8samcqop3lgvpzpxp6ejuw2v3xkh59112g9zz9fw2kjy5xz3fe8ex3uax90yezvqauz89x4ialta56ysu1gld13smt5yge830wcis9u24bfcfdg5z3g7xr68kn1oojzv90l5jfvcbm0npoto21qwltp12gm04n31hp0ic57fc2lza7yvmubw3vihamh6n8buab19uoayjpq0zouw5ufw6nz20xgjyq1xq3qt1cc45gs97ysn6q1m6yvsy8x3kl63zqc91md0pa4qrf0g11phz99ii8zuegbda4xz28i9pulwpsy4bsypy5lx05xd4egkgxw7uzruyf3nto3osxipif73btl9w6yv1inuumoqklcof65r27fephyrn1wgy6onhqe0maleabpjecyobq9137h9gyguge4fg644tnrn7keg5xs7le5k0xojgcn4fcw0h9kh9d3v3trb7bmbdnexhncnjppcyfc9soedarnfsegoc3ns7xvk6s9k8wc2kb7pcx9k1y18hfx3e89t51nthyhpi8l2e5so76k2mih13dcm5fno978kowb9hwsj2gq5bfrlg69f5asd5e02spmmu5wy47lh0ygm2ndz6s7fqotxm1ndsznzvbwar1el1hcaxgm3do10j137fovw0ymg23k4g5hi0kvtx60y12ex1oui4btq12cdt06sva9aepr23ek66e30w5tt0lek3raqh13jjxupml7xbvoiqct5sk174htkha1n46boy55hnmc3bzduqyr95s2kmi9swgkazxabgkbxuz33auh5vxaagjgawrhyhhlz70im89c6zd4elpmppaegcwca0444sapyt28relz0jtults95vslbycjmxo2g8si667mmcbk2cqs4ya59mjf38w4v7it4id5772aispa7ck1xvex2edk3kh4cznta18idh56g84masytx66fudiab6s9opudx7axh4olb9ik5qk7qyac74igtzq25z19d5um2fixtdkcor5mrk1n6zznodku9f78f7rzpe2yeg8eux7x3veh69k0e9rli587ierxc4pgobm1by0o513x8yevk47sn4uxgepr4bh13s9ji3dbqamyu6ocu56yzrtn58rzvj11m1aybqy3120juobmbh8nllwifg4hexbipeyaaztfm8pltkncaj5dfyq8fjcsme0sw7aulovpjfg5g3jv0ubembgx9hc17hnes3xfx9jgu6ebt5ouuo28255ucd7es9e400w6bbfdpyr52g9t0wuq1qba1rjxl0973dy46t9pydnjft7i2qaa988jlk5vs3ls85gp59bj494uq9hts3d92hptk0ji7gtewwyrpdd12k106c6pvm8tj0x3pt8z7tv6scwjq3tbicx7dfgsqvc1tc7nry3ut3fo5k2pp0vv6d7p19qdrn5jd9f9ql9l2ug6hkx8tun01n9wzwtznbclw9slnsm974l9kwoxxzvqk4w7od4u3mm20ncc8inig2si9kmoppfqlcsto6p2hqiqyiww1mko0tryciwkb1rlyk35fau05yyd6rqt5x3wzirz5v1w3jmrimzeq6omtuhjtacyy7dqseec2siykro6uux2kip4y63f4nk7ccl337ctqmb4l9tdzu1c5jlco1ml7tuz5tbh1iffze0ce7xb5dtem42bf76lq41dyltp7omaybvuy3kq3ch713ooho6gk5qg5kcvmlucwjisslcevrbtpqqdfo7ymh61ykkv6hhz5rqj8su89r786tzagbzyeb3pn6jo2j28xdp4na33kb1bn1j4a1l9sd3gmls5nb4130z53vho6ug6i3pkx0930rts7ocetl4z374gxhurnw12tnlab784s33kkkcrqgk6othge0y7m1ozwykbvc1hmfb9n06nqqs4qrbqpfbk4imxtxpm81rhux6k6vtjmnsjwi40tmkw2p6fqre2o546g8gbsc3wjpkw1tclbrwwbf2kzzxp12zoqi5vd4b1uekokrv
e6xka7v83vttkg8ghq21gec9sa3tpeq4bh5zbcq4xls9mrfrooorpk10qdq6cevxftlidw5rlmxnvnzllb6jzyei1d94g75airtkefrptnx5bu5ayt2l08pim5wnlym1mn6e6bu9a9bp1sxbl29f92mtsipnbtwktz7yresf5tkx22yd3nq9fqshz8jj3wcmmo1llatvwxtm982urijp35ezoc67tkslj9jdqfzmw4mdyfijg5m6eg871rcggqabzrgc9k1x4al32r8np5l8tw8qsn429l9vbep6etxw6j5vez5c57dx7deuorwc2798bd165r37y3bdk22vm2bwum4zkyy3ylg9zj8bemkc2w8jbe7lscubrdxylrdwqxi2zkhnelqgcjgk1x6p1obealltwqlx5zs34qhm0pfukvchlgn8sho1jpu2uux5qoer98mjq93xet2t8dmzy3z367ezcdbbd7bv4mnw9x9mogh4uz6uxrpl0bteryku5xn7pucdcarw8twt2jszjeuxtsvh1k3chxa4rk 00:30:46.229 13:14:50 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:30:46.229 13:14:50 -- dd/basic_rw.sh@59 -- # gen_conf 00:30:46.229 13:14:50 -- dd/common.sh@31 -- # xtrace_disable 00:30:46.229 13:14:50 -- common/autotest_common.sh@10 -- # set +x 00:30:46.229 [2024-04-17 13:14:50.161480] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:30:46.229 [2024-04-17 13:14:50.161970] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143488 ] 00:30:46.229 { 00:30:46.229 "subsystems": [ 00:30:46.229 { 00:30:46.229 "subsystem": "bdev", 00:30:46.229 "config": [ 00:30:46.229 { 00:30:46.229 "params": { 00:30:46.229 "trtype": "pcie", 00:30:46.229 "traddr": "0000:00:10.0", 00:30:46.229 "name": "Nvme0" 00:30:46.229 }, 00:30:46.229 "method": "bdev_nvme_attach_controller" 00:30:46.229 }, 00:30:46.229 { 00:30:46.229 "method": "bdev_wait_for_examine" 00:30:46.229 } 00:30:46.229 ] 00:30:46.229 } 00:30:46.229 ] 00:30:46.229 } 00:30:46.229 [2024-04-17 13:14:50.344644] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:46.488 [2024-04-17 13:14:50.563602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:47.989  Copying: 4096/4096 [B] (average 4000 kBps) 00:30:47.989 00:30:47.989 13:14:52 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:30:47.989 13:14:52 -- dd/basic_rw.sh@65 -- # gen_conf 00:30:47.989 13:14:52 -- dd/common.sh@31 -- # xtrace_disable 00:30:47.989 13:14:52 -- common/autotest_common.sh@10 -- # set +x 00:30:47.989 { 00:30:47.989 "subsystems": [ 00:30:47.989 { 00:30:47.989 "subsystem": "bdev", 00:30:47.989 "config": [ 00:30:47.989 { 00:30:47.989 "params": { 00:30:47.989 "trtype": "pcie", 00:30:47.989 "traddr": "0000:00:10.0", 00:30:47.989 "name": "Nvme0" 00:30:47.989 }, 00:30:47.989 "method": "bdev_nvme_attach_controller" 00:30:47.989 }, 00:30:47.989 { 00:30:47.989 "method": "bdev_wait_for_examine" 00:30:47.989 } 00:30:47.989 ] 00:30:47.989 } 00:30:47.989 ] 00:30:47.989 } 00:30:47.989 [2024-04-17 13:14:52.128095] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
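The dd_rw_offset test exercises --seek/--skip: gen_bytes produces the 4096-byte alphanumeric blob echoed above, spdk_dd writes it one block into the bdev with --seek=1, reads it back with --skip=1 --count=1, and the shell compares the round-tripped bytes with read -rn4096 and a [[ ... ]] test, which is why the same blob reappears below in backslash-escaped form (bash's xtrace escaping of the unquoted right-hand side). A hedged skeleton of that sequence; DD/CONF are the placeholders from the earlier sketches, and the redirection into read is assumed rather than visible in the trace:

gen_bytes 4096                      # fills dd.dump0 with the blob seen above
(( count = seek = skip = 1 ))
data=$(< dd.dump0)
"$DD" --if=dd.dump0 --ob=Nvme0n1 --seek=$seek --json "$CONF"                   # write at block 1
"$DD" --ib=Nvme0n1 --of=dd.dump1 --skip=$skip --count=$count --json "$CONF"    # read block 1 back
read -rn4096 data_check < dd.dump1
[[ $data == "$data_check" ]]        # quoted here to force a literal comparison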
00:30:47.989 [2024-04-17 13:14:52.128628] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143524 ] 00:30:48.248 [2024-04-17 13:14:52.300183] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:48.507 [2024-04-17 13:14:52.534466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:50.463  Copying: 4096/4096 [B] (average 4000 kBps) 00:30:50.463 00:30:50.463 13:14:54 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:30:50.463 13:14:54 -- dd/basic_rw.sh@72 -- # [[ imxg8n069hwgvl68bzp35ne84m9qkrhi10cc8aufidyw21wpycw4r97q1s4h6pdufqi5520f02kjcywzklvdo7yzv0e8gmbwpbprd5goq3zwbev5avymu4ns446isvh61j9qsu1bsae6xtw7jampkihfuh7izv5wmkew7m3exote75qatny8zty2ds94b9c8tyc3cmy7ecsk60iubobxa9qx3xy3cjhlfec564abmujh8dzxm82e727vmyslp8cu9tg241kol6j0d0zjm20gp0m37tn0fvujqhlbb6003fyqh8xaock8390pup81l7vfiswnsq5r9rsnewmilfch3o3grfp6dog9hfzpvehsktrrggz27k76cb9wj08d0yiptyykv2t5xqcrzn8k6c9icvozwgubue3w1bomo4jztptl7xzl49jiuh9qnzpk8z859ufm0hszv1prhiq2f37yoe8pobqp75uks884qjq4feojqwa4qy11iqkhcobv17b80sdpi4tingtny1zfzd6ko6akpz1keyvh8v8tnogty7kghns7yv7fhxjnusugz6a8caxutrams9yntggijsf83r86uguupievt9outg0q332zvos6f2lb2c3mypbpu6ldrssl8o106zc0fy2vs85kt404fj505ko1gaamwntxr0i8yltsdzn2v5ndurc4vx0vsymch6vnw49yz4dz18yspofkywvdsde5pzp4zkeaqauzrtwx2lylu4yb4o34lgpaqd91nfdgh81prkeblcq1y37k7o08u52mcobj4f5gt33z6dfgrgbxrfxbu7ya1mqcg4bgmwgkjh7qp6yvnegalruo9a20ted5frvehi22rdr1bzt3c4oo2ee5ci1icwu0ndlsdppml42ahf5dpx88o555kdhzy5hpaqyk5pz5ni3btrarqbpw851qgaujykg0yokztvtttd8eh7kur6qeqgzpovoaaot9suvgv7zt4aua4ldim3k31i36umnl753ou7sqj2uhg2jrjnnuv2j0arf1nfbtrksy54h3n3w8lt5j0mpmpg2r96dw3r4synlv4fpovdt5gw77q0fxd6npvzq6zuf3nfd1npi2jjdw01ymzf860ywvdjim6saqla63qzdasni15u0ppap7azmlr424sfc6hstmjrk62twi3jp97nepw2gc2ectzb656ad1tpn16t0eyf5e1jr4muqm5tet4461mtunannacbwe4djp971ug3s4yqxdaentlbvs4dsthd0valc57sn8xxds6cvg8le3k1llzbmqbal62cjmtucrnc6z7udxnyf52fg1uckreb3gdxch6is8z9trybrgtub5qkvk3e1l8p7vnirk0sztz5clwtu0dtglgga6f3bwj1nlkywe7gwnobohjlphncug4hifrmecbziimiqxuh3gxsaxujzgvo5fifi4jfzsqrr7kzev591ugg1wz46st0n3ii71bmfy8samcqop3lgvpzpxp6ejuw2v3xkh59112g9zz9fw2kjy5xz3fe8ex3uax90yezvqauz89x4ialta56ysu1gld13smt5yge830wcis9u24bfcfdg5z3g7xr68kn1oojzv90l5jfvcbm0npoto21qwltp12gm04n31hp0ic57fc2lza7yvmubw3vihamh6n8buab19uoayjpq0zouw5ufw6nz20xgjyq1xq3qt1cc45gs97ysn6q1m6yvsy8x3kl63zqc91md0pa4qrf0g11phz99ii8zuegbda4xz28i9pulwpsy4bsypy5lx05xd4egkgxw7uzruyf3nto3osxipif73btl9w6yv1inuumoqklcof65r27fephyrn1wgy6onhqe0maleabpjecyobq9137h9gyguge4fg644tnrn7keg5xs7le5k0xojgcn4fcw0h9kh9d3v3trb7bmbdnexhncnjppcyfc9soedarnfsegoc3ns7xvk6s9k8wc2kb7pcx9k1y18hfx3e89t51nthyhpi8l2e5so76k2mih13dcm5fno978kowb9hwsj2gq5bfrlg69f5asd5e02spmmu5wy47lh0ygm2ndz6s7fqotxm1ndsznzvbwar1el1hcaxgm3do10j137fovw0ymg23k4g5hi0kvtx60y12ex1oui4btq12cdt06sva9aepr23ek66e30w5tt0lek3raqh13jjxupml7xbvoiqct5sk174htkha1n46boy55hnmc3bzduqyr95s2kmi9swgkazxabgkbxuz33auh5vxaagjgawrhyhhlz70im89c6zd4elpmppaegcwca0444sapyt28relz0jtults95vslbycjmxo2g8si667mmcbk2cqs4ya59mjf38w4v7it4id5772aispa7ck1xvex2edk3kh4cznta18idh56g84masytx66fudiab6s9opudx7axh4olb9ik5qk7qyac74igtzq25z19d5um2fixtdkcor5mrk1n6zznodku9f78f7rzpe2yeg8eux7x3veh69k0e9rli587ierxc4pgobm1by0o513x8yevk47sn4uxgepr4bh13s9ji3dbqamyu6ocu56yzrtn58rzvj11m1aybqy3120juobmbh8nllwifg4hexbipeyaaztfm8pltkncaj5dfyq8fjcsme0sw7aulovpjfg5g3jv0ubembgx9hc17hnes3xfx9jgu6ebt5ouuo28255ucd7es9e400w6bbfdpyr52g9t0wuq1qba1rjxl0973dy46t9pydnjft7i2qaa988jlk5vs3ls85gp59bj494uq9hts3d92
hptk0ji7gtewwyrpdd12k106c6pvm8tj0x3pt8z7tv6scwjq3tbicx7dfgsqvc1tc7nry3ut3fo5k2pp0vv6d7p19qdrn5jd9f9ql9l2ug6hkx8tun01n9wzwtznbclw9slnsm974l9kwoxxzvqk4w7od4u3mm20ncc8inig2si9kmoppfqlcsto6p2hqiqyiww1mko0tryciwkb1rlyk35fau05yyd6rqt5x3wzirz5v1w3jmrimzeq6omtuhjtacyy7dqseec2siykro6uux2kip4y63f4nk7ccl337ctqmb4l9tdzu1c5jlco1ml7tuz5tbh1iffze0ce7xb5dtem42bf76lq41dyltp7omaybvuy3kq3ch713ooho6gk5qg5kcvmlucwjisslcevrbtpqqdfo7ymh61ykkv6hhz5rqj8su89r786tzagbzyeb3pn6jo2j28xdp4na33kb1bn1j4a1l9sd3gmls5nb4130z53vho6ug6i3pkx0930rts7ocetl4z374gxhurnw12tnlab784s33kkkcrqgk6othge0y7m1ozwykbvc1hmfb9n06nqqs4qrbqpfbk4imxtxpm81rhux6k6vtjmnsjwi40tmkw2p6fqre2o546g8gbsc3wjpkw1tclbrwwbf2kzzxp12zoqi5vd4b1uekokrve6xka7v83vttkg8ghq21gec9sa3tpeq4bh5zbcq4xls9mrfrooorpk10qdq6cevxftlidw5rlmxnvnzllb6jzyei1d94g75airtkefrptnx5bu5ayt2l08pim5wnlym1mn6e6bu9a9bp1sxbl29f92mtsipnbtwktz7yresf5tkx22yd3nq9fqshz8jj3wcmmo1llatvwxtm982urijp35ezoc67tkslj9jdqfzmw4mdyfijg5m6eg871rcggqabzrgc9k1x4al32r8np5l8tw8qsn429l9vbep6etxw6j5vez5c57dx7deuorwc2798bd165r37y3bdk22vm2bwum4zkyy3ylg9zj8bemkc2w8jbe7lscubrdxylrdwqxi2zkhnelqgcjgk1x6p1obealltwqlx5zs34qhm0pfukvchlgn8sho1jpu2uux5qoer98mjq93xet2t8dmzy3z367ezcdbbd7bv4mnw9x9mogh4uz6uxrpl0bteryku5xn7pucdcarw8twt2jszjeuxtsvh1k3chxa4rk == \i\m\x\g\8\n\0\6\9\h\w\g\v\l\6\8\b\z\p\3\5\n\e\8\4\m\9\q\k\r\h\i\1\0\c\c\8\a\u\f\i\d\y\w\2\1\w\p\y\c\w\4\r\9\7\q\1\s\4\h\6\p\d\u\f\q\i\5\5\2\0\f\0\2\k\j\c\y\w\z\k\l\v\d\o\7\y\z\v\0\e\8\g\m\b\w\p\b\p\r\d\5\g\o\q\3\z\w\b\e\v\5\a\v\y\m\u\4\n\s\4\4\6\i\s\v\h\6\1\j\9\q\s\u\1\b\s\a\e\6\x\t\w\7\j\a\m\p\k\i\h\f\u\h\7\i\z\v\5\w\m\k\e\w\7\m\3\e\x\o\t\e\7\5\q\a\t\n\y\8\z\t\y\2\d\s\9\4\b\9\c\8\t\y\c\3\c\m\y\7\e\c\s\k\6\0\i\u\b\o\b\x\a\9\q\x\3\x\y\3\c\j\h\l\f\e\c\5\6\4\a\b\m\u\j\h\8\d\z\x\m\8\2\e\7\2\7\v\m\y\s\l\p\8\c\u\9\t\g\2\4\1\k\o\l\6\j\0\d\0\z\j\m\2\0\g\p\0\m\3\7\t\n\0\f\v\u\j\q\h\l\b\b\6\0\0\3\f\y\q\h\8\x\a\o\c\k\8\3\9\0\p\u\p\8\1\l\7\v\f\i\s\w\n\s\q\5\r\9\r\s\n\e\w\m\i\l\f\c\h\3\o\3\g\r\f\p\6\d\o\g\9\h\f\z\p\v\e\h\s\k\t\r\r\g\g\z\2\7\k\7\6\c\b\9\w\j\0\8\d\0\y\i\p\t\y\y\k\v\2\t\5\x\q\c\r\z\n\8\k\6\c\9\i\c\v\o\z\w\g\u\b\u\e\3\w\1\b\o\m\o\4\j\z\t\p\t\l\7\x\z\l\4\9\j\i\u\h\9\q\n\z\p\k\8\z\8\5\9\u\f\m\0\h\s\z\v\1\p\r\h\i\q\2\f\3\7\y\o\e\8\p\o\b\q\p\7\5\u\k\s\8\8\4\q\j\q\4\f\e\o\j\q\w\a\4\q\y\1\1\i\q\k\h\c\o\b\v\1\7\b\8\0\s\d\p\i\4\t\i\n\g\t\n\y\1\z\f\z\d\6\k\o\6\a\k\p\z\1\k\e\y\v\h\8\v\8\t\n\o\g\t\y\7\k\g\h\n\s\7\y\v\7\f\h\x\j\n\u\s\u\g\z\6\a\8\c\a\x\u\t\r\a\m\s\9\y\n\t\g\g\i\j\s\f\8\3\r\8\6\u\g\u\u\p\i\e\v\t\9\o\u\t\g\0\q\3\3\2\z\v\o\s\6\f\2\l\b\2\c\3\m\y\p\b\p\u\6\l\d\r\s\s\l\8\o\1\0\6\z\c\0\f\y\2\v\s\8\5\k\t\4\0\4\f\j\5\0\5\k\o\1\g\a\a\m\w\n\t\x\r\0\i\8\y\l\t\s\d\z\n\2\v\5\n\d\u\r\c\4\v\x\0\v\s\y\m\c\h\6\v\n\w\4\9\y\z\4\d\z\1\8\y\s\p\o\f\k\y\w\v\d\s\d\e\5\p\z\p\4\z\k\e\a\q\a\u\z\r\t\w\x\2\l\y\l\u\4\y\b\4\o\3\4\l\g\p\a\q\d\9\1\n\f\d\g\h\8\1\p\r\k\e\b\l\c\q\1\y\3\7\k\7\o\0\8\u\5\2\m\c\o\b\j\4\f\5\g\t\3\3\z\6\d\f\g\r\g\b\x\r\f\x\b\u\7\y\a\1\m\q\c\g\4\b\g\m\w\g\k\j\h\7\q\p\6\y\v\n\e\g\a\l\r\u\o\9\a\2\0\t\e\d\5\f\r\v\e\h\i\2\2\r\d\r\1\b\z\t\3\c\4\o\o\2\e\e\5\c\i\1\i\c\w\u\0\n\d\l\s\d\p\p\m\l\4\2\a\h\f\5\d\p\x\8\8\o\5\5\5\k\d\h\z\y\5\h\p\a\q\y\k\5\p\z\5\n\i\3\b\t\r\a\r\q\b\p\w\8\5\1\q\g\a\u\j\y\k\g\0\y\o\k\z\t\v\t\t\t\d\8\e\h\7\k\u\r\6\q\e\q\g\z\p\o\v\o\a\a\o\t\9\s\u\v\g\v\7\z\t\4\a\u\a\4\l\d\i\m\3\k\3\1\i\3\6\u\m\n\l\7\5\3\o\u\7\s\q\j\2\u\h\g\2\j\r\j\n\n\u\v\2\j\0\a\r\f\1\n\f\b\t\r\k\s\y\5\4\h\3\n\3\w\8\l\t\5\j\0\m\p\m\p\g\2\r\9\6\d\w\3\r\4\s\y\n\l\v\4\f\p\o\v\d\t\5\g\w\7\7\q\0\f\x\d\6\n\p\v\z\q\6\z\u\f\3\n\f\d\1\n\p\i\2\j\j\d\w\0\1\y\m\z\f\8\6\0\y\w\v\d\j\i\m\6\s\a\q\l\a\6\3\q\z\d\a\s\n\i\
1\5\u\0\p\p\a\p\7\a\z\m\l\r\4\2\4\s\f\c\6\h\s\t\m\j\r\k\6\2\t\w\i\3\j\p\9\7\n\e\p\w\2\g\c\2\e\c\t\z\b\6\5\6\a\d\1\t\p\n\1\6\t\0\e\y\f\5\e\1\j\r\4\m\u\q\m\5\t\e\t\4\4\6\1\m\t\u\n\a\n\n\a\c\b\w\e\4\d\j\p\9\7\1\u\g\3\s\4\y\q\x\d\a\e\n\t\l\b\v\s\4\d\s\t\h\d\0\v\a\l\c\5\7\s\n\8\x\x\d\s\6\c\v\g\8\l\e\3\k\1\l\l\z\b\m\q\b\a\l\6\2\c\j\m\t\u\c\r\n\c\6\z\7\u\d\x\n\y\f\5\2\f\g\1\u\c\k\r\e\b\3\g\d\x\c\h\6\i\s\8\z\9\t\r\y\b\r\g\t\u\b\5\q\k\v\k\3\e\1\l\8\p\7\v\n\i\r\k\0\s\z\t\z\5\c\l\w\t\u\0\d\t\g\l\g\g\a\6\f\3\b\w\j\1\n\l\k\y\w\e\7\g\w\n\o\b\o\h\j\l\p\h\n\c\u\g\4\h\i\f\r\m\e\c\b\z\i\i\m\i\q\x\u\h\3\g\x\s\a\x\u\j\z\g\v\o\5\f\i\f\i\4\j\f\z\s\q\r\r\7\k\z\e\v\5\9\1\u\g\g\1\w\z\4\6\s\t\0\n\3\i\i\7\1\b\m\f\y\8\s\a\m\c\q\o\p\3\l\g\v\p\z\p\x\p\6\e\j\u\w\2\v\3\x\k\h\5\9\1\1\2\g\9\z\z\9\f\w\2\k\j\y\5\x\z\3\f\e\8\e\x\3\u\a\x\9\0\y\e\z\v\q\a\u\z\8\9\x\4\i\a\l\t\a\5\6\y\s\u\1\g\l\d\1\3\s\m\t\5\y\g\e\8\3\0\w\c\i\s\9\u\2\4\b\f\c\f\d\g\5\z\3\g\7\x\r\6\8\k\n\1\o\o\j\z\v\9\0\l\5\j\f\v\c\b\m\0\n\p\o\t\o\2\1\q\w\l\t\p\1\2\g\m\0\4\n\3\1\h\p\0\i\c\5\7\f\c\2\l\z\a\7\y\v\m\u\b\w\3\v\i\h\a\m\h\6\n\8\b\u\a\b\1\9\u\o\a\y\j\p\q\0\z\o\u\w\5\u\f\w\6\n\z\2\0\x\g\j\y\q\1\x\q\3\q\t\1\c\c\4\5\g\s\9\7\y\s\n\6\q\1\m\6\y\v\s\y\8\x\3\k\l\6\3\z\q\c\9\1\m\d\0\p\a\4\q\r\f\0\g\1\1\p\h\z\9\9\i\i\8\z\u\e\g\b\d\a\4\x\z\2\8\i\9\p\u\l\w\p\s\y\4\b\s\y\p\y\5\l\x\0\5\x\d\4\e\g\k\g\x\w\7\u\z\r\u\y\f\3\n\t\o\3\o\s\x\i\p\i\f\7\3\b\t\l\9\w\6\y\v\1\i\n\u\u\m\o\q\k\l\c\o\f\6\5\r\2\7\f\e\p\h\y\r\n\1\w\g\y\6\o\n\h\q\e\0\m\a\l\e\a\b\p\j\e\c\y\o\b\q\9\1\3\7\h\9\g\y\g\u\g\e\4\f\g\6\4\4\t\n\r\n\7\k\e\g\5\x\s\7\l\e\5\k\0\x\o\j\g\c\n\4\f\c\w\0\h\9\k\h\9\d\3\v\3\t\r\b\7\b\m\b\d\n\e\x\h\n\c\n\j\p\p\c\y\f\c\9\s\o\e\d\a\r\n\f\s\e\g\o\c\3\n\s\7\x\v\k\6\s\9\k\8\w\c\2\k\b\7\p\c\x\9\k\1\y\1\8\h\f\x\3\e\8\9\t\5\1\n\t\h\y\h\p\i\8\l\2\e\5\s\o\7\6\k\2\m\i\h\1\3\d\c\m\5\f\n\o\9\7\8\k\o\w\b\9\h\w\s\j\2\g\q\5\b\f\r\l\g\6\9\f\5\a\s\d\5\e\0\2\s\p\m\m\u\5\w\y\4\7\l\h\0\y\g\m\2\n\d\z\6\s\7\f\q\o\t\x\m\1\n\d\s\z\n\z\v\b\w\a\r\1\e\l\1\h\c\a\x\g\m\3\d\o\1\0\j\1\3\7\f\o\v\w\0\y\m\g\2\3\k\4\g\5\h\i\0\k\v\t\x\6\0\y\1\2\e\x\1\o\u\i\4\b\t\q\1\2\c\d\t\0\6\s\v\a\9\a\e\p\r\2\3\e\k\6\6\e\3\0\w\5\t\t\0\l\e\k\3\r\a\q\h\1\3\j\j\x\u\p\m\l\7\x\b\v\o\i\q\c\t\5\s\k\1\7\4\h\t\k\h\a\1\n\4\6\b\o\y\5\5\h\n\m\c\3\b\z\d\u\q\y\r\9\5\s\2\k\m\i\9\s\w\g\k\a\z\x\a\b\g\k\b\x\u\z\3\3\a\u\h\5\v\x\a\a\g\j\g\a\w\r\h\y\h\h\l\z\7\0\i\m\8\9\c\6\z\d\4\e\l\p\m\p\p\a\e\g\c\w\c\a\0\4\4\4\s\a\p\y\t\2\8\r\e\l\z\0\j\t\u\l\t\s\9\5\v\s\l\b\y\c\j\m\x\o\2\g\8\s\i\6\6\7\m\m\c\b\k\2\c\q\s\4\y\a\5\9\m\j\f\3\8\w\4\v\7\i\t\4\i\d\5\7\7\2\a\i\s\p\a\7\c\k\1\x\v\e\x\2\e\d\k\3\k\h\4\c\z\n\t\a\1\8\i\d\h\5\6\g\8\4\m\a\s\y\t\x\6\6\f\u\d\i\a\b\6\s\9\o\p\u\d\x\7\a\x\h\4\o\l\b\9\i\k\5\q\k\7\q\y\a\c\7\4\i\g\t\z\q\2\5\z\1\9\d\5\u\m\2\f\i\x\t\d\k\c\o\r\5\m\r\k\1\n\6\z\z\n\o\d\k\u\9\f\7\8\f\7\r\z\p\e\2\y\e\g\8\e\u\x\7\x\3\v\e\h\6\9\k\0\e\9\r\l\i\5\8\7\i\e\r\x\c\4\p\g\o\b\m\1\b\y\0\o\5\1\3\x\8\y\e\v\k\4\7\s\n\4\u\x\g\e\p\r\4\b\h\1\3\s\9\j\i\3\d\b\q\a\m\y\u\6\o\c\u\5\6\y\z\r\t\n\5\8\r\z\v\j\1\1\m\1\a\y\b\q\y\3\1\2\0\j\u\o\b\m\b\h\8\n\l\l\w\i\f\g\4\h\e\x\b\i\p\e\y\a\a\z\t\f\m\8\p\l\t\k\n\c\a\j\5\d\f\y\q\8\f\j\c\s\m\e\0\s\w\7\a\u\l\o\v\p\j\f\g\5\g\3\j\v\0\u\b\e\m\b\g\x\9\h\c\1\7\h\n\e\s\3\x\f\x\9\j\g\u\6\e\b\t\5\o\u\u\o\2\8\2\5\5\u\c\d\7\e\s\9\e\4\0\0\w\6\b\b\f\d\p\y\r\5\2\g\9\t\0\w\u\q\1\q\b\a\1\r\j\x\l\0\9\7\3\d\y\4\6\t\9\p\y\d\n\j\f\t\7\i\2\q\a\a\9\8\8\j\l\k\5\v\s\3\l\s\8\5\g\p\5\9\b\j\4\9\4\u\q\9\h\t\s\3\d\9\2\h\p\t\k\0\j\i\7\g\t\e\w\w\y\r\p\d\d\1\2\k\1\0\6\c\6\p\v\m\8\t\j\0\x\3\p\t\8\z\7\t\v\6\s\c\w\j\q\3\t\b\i\c\x\7\d\f\g\s\q\v\c\1\t\c\7\n\r\y\3\u\t\3
\f\o\5\k\2\p\p\0\v\v\6\d\7\p\1\9\q\d\r\n\5\j\d\9\f\9\q\l\9\l\2\u\g\6\h\k\x\8\t\u\n\0\1\n\9\w\z\w\t\z\n\b\c\l\w\9\s\l\n\s\m\9\7\4\l\9\k\w\o\x\x\z\v\q\k\4\w\7\o\d\4\u\3\m\m\2\0\n\c\c\8\i\n\i\g\2\s\i\9\k\m\o\p\p\f\q\l\c\s\t\o\6\p\2\h\q\i\q\y\i\w\w\1\m\k\o\0\t\r\y\c\i\w\k\b\1\r\l\y\k\3\5\f\a\u\0\5\y\y\d\6\r\q\t\5\x\3\w\z\i\r\z\5\v\1\w\3\j\m\r\i\m\z\e\q\6\o\m\t\u\h\j\t\a\c\y\y\7\d\q\s\e\e\c\2\s\i\y\k\r\o\6\u\u\x\2\k\i\p\4\y\6\3\f\4\n\k\7\c\c\l\3\3\7\c\t\q\m\b\4\l\9\t\d\z\u\1\c\5\j\l\c\o\1\m\l\7\t\u\z\5\t\b\h\1\i\f\f\z\e\0\c\e\7\x\b\5\d\t\e\m\4\2\b\f\7\6\l\q\4\1\d\y\l\t\p\7\o\m\a\y\b\v\u\y\3\k\q\3\c\h\7\1\3\o\o\h\o\6\g\k\5\q\g\5\k\c\v\m\l\u\c\w\j\i\s\s\l\c\e\v\r\b\t\p\q\q\d\f\o\7\y\m\h\6\1\y\k\k\v\6\h\h\z\5\r\q\j\8\s\u\8\9\r\7\8\6\t\z\a\g\b\z\y\e\b\3\p\n\6\j\o\2\j\2\8\x\d\p\4\n\a\3\3\k\b\1\b\n\1\j\4\a\1\l\9\s\d\3\g\m\l\s\5\n\b\4\1\3\0\z\5\3\v\h\o\6\u\g\6\i\3\p\k\x\0\9\3\0\r\t\s\7\o\c\e\t\l\4\z\3\7\4\g\x\h\u\r\n\w\1\2\t\n\l\a\b\7\8\4\s\3\3\k\k\k\c\r\q\g\k\6\o\t\h\g\e\0\y\7\m\1\o\z\w\y\k\b\v\c\1\h\m\f\b\9\n\0\6\n\q\q\s\4\q\r\b\q\p\f\b\k\4\i\m\x\t\x\p\m\8\1\r\h\u\x\6\k\6\v\t\j\m\n\s\j\w\i\4\0\t\m\k\w\2\p\6\f\q\r\e\2\o\5\4\6\g\8\g\b\s\c\3\w\j\p\k\w\1\t\c\l\b\r\w\w\b\f\2\k\z\z\x\p\1\2\z\o\q\i\5\v\d\4\b\1\u\e\k\o\k\r\v\e\6\x\k\a\7\v\8\3\v\t\t\k\g\8\g\h\q\2\1\g\e\c\9\s\a\3\t\p\e\q\4\b\h\5\z\b\c\q\4\x\l\s\9\m\r\f\r\o\o\o\r\p\k\1\0\q\d\q\6\c\e\v\x\f\t\l\i\d\w\5\r\l\m\x\n\v\n\z\l\l\b\6\j\z\y\e\i\1\d\9\4\g\7\5\a\i\r\t\k\e\f\r\p\t\n\x\5\b\u\5\a\y\t\2\l\0\8\p\i\m\5\w\n\l\y\m\1\m\n\6\e\6\b\u\9\a\9\b\p\1\s\x\b\l\2\9\f\9\2\m\t\s\i\p\n\b\t\w\k\t\z\7\y\r\e\s\f\5\t\k\x\2\2\y\d\3\n\q\9\f\q\s\h\z\8\j\j\3\w\c\m\m\o\1\l\l\a\t\v\w\x\t\m\9\8\2\u\r\i\j\p\3\5\e\z\o\c\6\7\t\k\s\l\j\9\j\d\q\f\z\m\w\4\m\d\y\f\i\j\g\5\m\6\e\g\8\7\1\r\c\g\g\q\a\b\z\r\g\c\9\k\1\x\4\a\l\3\2\r\8\n\p\5\l\8\t\w\8\q\s\n\4\2\9\l\9\v\b\e\p\6\e\t\x\w\6\j\5\v\e\z\5\c\5\7\d\x\7\d\e\u\o\r\w\c\2\7\9\8\b\d\1\6\5\r\3\7\y\3\b\d\k\2\2\v\m\2\b\w\u\m\4\z\k\y\y\3\y\l\g\9\z\j\8\b\e\m\k\c\2\w\8\j\b\e\7\l\s\c\u\b\r\d\x\y\l\r\d\w\q\x\i\2\z\k\h\n\e\l\q\g\c\j\g\k\1\x\6\p\1\o\b\e\a\l\l\t\w\q\l\x\5\z\s\3\4\q\h\m\0\p\f\u\k\v\c\h\l\g\n\8\s\h\o\1\j\p\u\2\u\u\x\5\q\o\e\r\9\8\m\j\q\9\3\x\e\t\2\t\8\d\m\z\y\3\z\3\6\7\e\z\c\d\b\b\d\7\b\v\4\m\n\w\9\x\9\m\o\g\h\4\u\z\6\u\x\r\p\l\0\b\t\e\r\y\k\u\5\x\n\7\p\u\c\d\c\a\r\w\8\t\w\t\2\j\s\z\j\e\u\x\t\s\v\h\1\k\3\c\h\x\a\4\r\k ]] 00:30:50.463 00:30:50.463 real 0m4.286s 00:30:50.463 user 0m3.562s 00:30:50.463 sys 0m0.576s 00:30:50.463 13:14:54 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:30:50.463 13:14:54 -- common/autotest_common.sh@10 -- # set +x 00:30:50.463 ************************************ 00:30:50.463 END TEST dd_rw_offset 00:30:50.463 ************************************ 00:30:50.464 13:14:54 -- dd/basic_rw.sh@1 -- # cleanup 00:30:50.464 13:14:54 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:30:50.464 13:14:54 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:30:50.464 13:14:54 -- dd/common.sh@11 -- # local nvme_ref= 00:30:50.464 13:14:54 -- dd/common.sh@12 -- # local size=0xffff 00:30:50.464 13:14:54 -- dd/common.sh@14 -- # local bs=1048576 00:30:50.464 13:14:54 -- dd/common.sh@15 -- # local count=1 00:30:50.464 13:14:54 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:30:50.464 13:14:54 -- dd/common.sh@18 -- # gen_conf 00:30:50.464 13:14:54 -- dd/common.sh@31 -- # xtrace_disable 00:30:50.464 13:14:54 -- common/autotest_common.sh@10 -- # set +x 00:30:50.464 { 00:30:50.464 "subsystems": [ 00:30:50.464 { 00:30:50.464 
"subsystem": "bdev", 00:30:50.464 "config": [ 00:30:50.464 { 00:30:50.464 "params": { 00:30:50.464 "trtype": "pcie", 00:30:50.464 "traddr": "0000:00:10.0", 00:30:50.464 "name": "Nvme0" 00:30:50.464 }, 00:30:50.464 "method": "bdev_nvme_attach_controller" 00:30:50.464 }, 00:30:50.464 { 00:30:50.464 "method": "bdev_wait_for_examine" 00:30:50.464 } 00:30:50.464 ] 00:30:50.464 } 00:30:50.464 ] 00:30:50.464 } 00:30:50.464 [2024-04-17 13:14:54.467647] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:30:50.464 [2024-04-17 13:14:54.468142] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143584 ] 00:30:50.723 [2024-04-17 13:14:54.642929] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:50.982 [2024-04-17 13:14:54.891307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:52.617  Copying: 1024/1024 [kB] (average 500 MBps) 00:30:52.617 00:30:52.617 13:14:56 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:52.617 ************************************ 00:30:52.617 END TEST spdk_dd_basic_rw 00:30:52.617 ************************************ 00:30:52.617 00:30:52.617 real 0m48.985s 00:30:52.617 user 0m41.228s 00:30:52.617 sys 0m6.130s 00:30:52.617 13:14:56 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:30:52.617 13:14:56 -- common/autotest_common.sh@10 -- # set +x 00:30:52.617 13:14:56 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:30:52.617 13:14:56 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:30:52.617 13:14:56 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:30:52.617 13:14:56 -- common/autotest_common.sh@10 -- # set +x 00:30:52.617 ************************************ 00:30:52.617 START TEST spdk_dd_posix 00:30:52.617 ************************************ 00:30:52.617 13:14:56 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:30:52.617 * Looking for test storage... 
00:30:52.617 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:30:52.617 13:14:56 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:52.617 13:14:56 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:52.617 13:14:56 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:52.617 13:14:56 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:52.617 13:14:56 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:52.617 13:14:56 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:52.617 13:14:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:52.617 13:14:56 -- paths/export.sh@5 -- # export PATH 00:30:52.617 13:14:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:52.617 13:14:56 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:30:52.617 13:14:56 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:30:52.617 13:14:56 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:30:52.617 13:14:56 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:30:52.617 13:14:56 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:30:52.617 13:14:56 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:52.617 13:14:56 -- dd/posix.sh@130 -- # tests 00:30:52.617 13:14:56 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', using AIO' 00:30:52.617 * First test run, using AIO 00:30:52.617 13:14:56 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:30:52.617 13:14:56 -- 
common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:30:52.617 13:14:56 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:30:52.617 13:14:56 -- common/autotest_common.sh@10 -- # set +x 00:30:52.617 ************************************ 00:30:52.617 START TEST dd_flag_append 00:30:52.617 ************************************ 00:30:52.617 13:14:56 -- common/autotest_common.sh@1099 -- # append 00:30:52.617 13:14:56 -- dd/posix.sh@16 -- # local dump0 00:30:52.617 13:14:56 -- dd/posix.sh@17 -- # local dump1 00:30:52.617 13:14:56 -- dd/posix.sh@19 -- # gen_bytes 32 00:30:52.617 13:14:56 -- dd/common.sh@98 -- # xtrace_disable 00:30:52.617 13:14:56 -- common/autotest_common.sh@10 -- # set +x 00:30:52.617 13:14:56 -- dd/posix.sh@19 -- # dump0=r502wpqntq6admkbu5ufvyv90rp3o2h7 00:30:52.618 13:14:56 -- dd/posix.sh@20 -- # gen_bytes 32 00:30:52.618 13:14:56 -- dd/common.sh@98 -- # xtrace_disable 00:30:52.618 13:14:56 -- common/autotest_common.sh@10 -- # set +x 00:30:52.618 13:14:56 -- dd/posix.sh@20 -- # dump1=rqk3bzdddw7g91vjpobakbqcxiwe92xj 00:30:52.618 13:14:56 -- dd/posix.sh@22 -- # printf %s r502wpqntq6admkbu5ufvyv90rp3o2h7 00:30:52.618 13:14:56 -- dd/posix.sh@23 -- # printf %s rqk3bzdddw7g91vjpobakbqcxiwe92xj 00:30:52.618 13:14:56 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:30:52.618 [2024-04-17 13:14:56.695794] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:30:52.618 [2024-04-17 13:14:56.696221] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143676 ] 00:30:52.876 [2024-04-17 13:14:56.868345] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:53.135 [2024-04-17 13:14:57.126518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:54.769  Copying: 32/32 [B] (average 31 kBps) 00:30:54.769 00:30:54.769 ************************************ 00:30:54.769 END TEST dd_flag_append 00:30:54.769 ************************************ 00:30:54.769 13:14:58 -- dd/posix.sh@27 -- # [[ rqk3bzdddw7g91vjpobakbqcxiwe92xjr502wpqntq6admkbu5ufvyv90rp3o2h7 == \r\q\k\3\b\z\d\d\d\w\7\g\9\1\v\j\p\o\b\a\k\b\q\c\x\i\w\e\9\2\x\j\r\5\0\2\w\p\q\n\t\q\6\a\d\m\k\b\u\5\u\f\v\y\v\9\0\r\p\3\o\2\h\7 ]] 00:30:54.769 00:30:54.769 real 0m1.994s 00:30:54.769 user 0m1.646s 00:30:54.769 sys 0m0.213s 00:30:54.769 13:14:58 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:30:54.769 13:14:58 -- common/autotest_common.sh@10 -- # set +x 00:30:54.769 13:14:58 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:30:54.769 13:14:58 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:30:54.769 13:14:58 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:30:54.769 13:14:58 -- common/autotest_common.sh@10 -- # set +x 00:30:54.769 ************************************ 00:30:54.769 START TEST dd_flag_directory 00:30:54.769 ************************************ 00:30:54.769 13:14:58 -- common/autotest_common.sh@1099 -- # directory 00:30:54.769 13:14:58 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:30:54.769 13:14:58 -- common/autotest_common.sh@638 -- # local es=0 00:30:54.769 
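The dd_flag_append run above generates two 32-byte random tags (dump0 and dump1), writes them to dd.dump0 and dd.dump1, copies dump0 onto dump1 with --oflag=append, and then asserts that dump1 now holds dump1's tag followed by dump0's. The same check sketched with coreutils dd as an illustrative stand-in for the spdk_dd call:

    # Sketch of the append assertion; tag values copied from the log above.
    dump0='r502wpqntq6admkbu5ufvyv90rp3o2h7'
    dump1='rqk3bzdddw7g91vjpobakbqcxiwe92xj'
    printf %s "$dump0" > dd.dump0
    printf %s "$dump1" > dd.dump1
    dd if=dd.dump0 of=dd.dump1 oflag=append conv=notrunc status=none
    [[ $(<dd.dump1) == "${dump1}${dump0}" ]] && echo 'append verified'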
13:14:58 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:30:54.769 13:14:58 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:54.769 13:14:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:54.769 13:14:58 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:54.769 13:14:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:54.769 13:14:58 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:54.769 13:14:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:54.769 13:14:58 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:54.769 13:14:58 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:30:54.769 13:14:58 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:30:54.769 [2024-04-17 13:14:58.763427] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:30:54.769 [2024-04-17 13:14:58.763921] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143734 ] 00:30:55.027 [2024-04-17 13:14:58.937288] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:55.027 [2024-04-17 13:14:59.137486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:55.627 [2024-04-17 13:14:59.456489] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:30:55.627 [2024-04-17 13:14:59.456839] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:30:55.627 [2024-04-17 13:14:59.456904] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:56.231 [2024-04-17 13:15:00.203025] spdk_dd.c:1535:main: *ERROR*: Error occurred while performing copy 00:30:56.491 13:15:00 -- common/autotest_common.sh@641 -- # es=236 00:30:56.491 13:15:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:30:56.491 13:15:00 -- common/autotest_common.sh@650 -- # es=108 00:30:56.491 13:15:00 -- common/autotest_common.sh@651 -- # case "$es" in 00:30:56.491 13:15:00 -- common/autotest_common.sh@658 -- # es=1 00:30:56.491 13:15:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:30:56.491 13:15:00 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:30:56.491 13:15:00 -- common/autotest_common.sh@638 -- # local es=0 00:30:56.491 13:15:00 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:30:56.491 13:15:00 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:56.491 13:15:00 -- common/autotest_common.sh@630 -- 
# case "$(type -t "$arg")" in 00:30:56.491 13:15:00 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:56.491 13:15:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:56.491 13:15:00 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:56.491 13:15:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:56.491 13:15:00 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:56.491 13:15:00 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:30:56.491 13:15:00 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:30:56.753 [2024-04-17 13:15:00.659610] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:30:56.753 [2024-04-17 13:15:00.660109] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143761 ] 00:30:56.753 [2024-04-17 13:15:00.814954] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:57.012 [2024-04-17 13:15:01.027436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:57.270 [2024-04-17 13:15:01.340294] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:30:57.270 [2024-04-17 13:15:01.340685] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:30:57.270 [2024-04-17 13:15:01.340763] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:58.205 [2024-04-17 13:15:02.078443] spdk_dd.c:1535:main: *ERROR*: Error occurred while performing copy 00:30:58.464 ************************************ 00:30:58.464 END TEST dd_flag_directory 00:30:58.464 ************************************ 00:30:58.464 13:15:02 -- common/autotest_common.sh@641 -- # es=236 00:30:58.464 13:15:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:30:58.464 13:15:02 -- common/autotest_common.sh@650 -- # es=108 00:30:58.464 13:15:02 -- common/autotest_common.sh@651 -- # case "$es" in 00:30:58.464 13:15:02 -- common/autotest_common.sh@658 -- # es=1 00:30:58.464 13:15:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:30:58.464 00:30:58.464 real 0m3.798s 00:30:58.464 user 0m3.165s 00:30:58.464 sys 0m0.429s 00:30:58.464 13:15:02 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:30:58.464 13:15:02 -- common/autotest_common.sh@10 -- # set +x 00:30:58.464 13:15:02 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:30:58.464 13:15:02 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:30:58.464 13:15:02 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:30:58.464 13:15:02 -- common/autotest_common.sh@10 -- # set +x 00:30:58.464 ************************************ 00:30:58.464 START TEST dd_flag_nofollow 00:30:58.464 ************************************ 00:30:58.464 13:15:02 -- common/autotest_common.sh@1099 -- # nofollow 00:30:58.464 13:15:02 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:30:58.464 13:15:02 -- dd/posix.sh@37 -- # local 
test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:30:58.464 13:15:02 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:30:58.464 13:15:02 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:30:58.464 13:15:02 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:58.464 13:15:02 -- common/autotest_common.sh@638 -- # local es=0 00:30:58.464 13:15:02 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:58.464 13:15:02 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:58.464 13:15:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:58.464 13:15:02 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:58.464 13:15:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:58.464 13:15:02 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:58.464 13:15:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:58.464 13:15:02 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:58.464 13:15:02 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:30:58.464 13:15:02 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:58.736 [2024-04-17 13:15:02.651259] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
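The two ln -fs calls above create dd.dump0.link and dd.dump1.link pointing at the dump files; the NOT-wrapped spdk_dd that follows opens the link with --iflag=nofollow and is expected to fail with ELOOP ("Too many levels of symbolic links"), as the error lines below confirm. Coreutils dd exposes the same O_NOFOLLOW behavior, so the assertion can be sketched as:

    # Illustrative nofollow check; the real test wraps spdk_dd in NOT instead.
    ln -fs dd.dump0 dd.dump0.link
    if ! dd if=dd.dump0.link iflag=nofollow of=/dev/null 2> err.log; then
        grep -q 'symbolic links' err.log && echo 'ELOOP, as expected'
    fi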
00:30:58.736 [2024-04-17 13:15:02.651470] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143810 ] 00:30:58.736 [2024-04-17 13:15:02.823143] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:58.995 [2024-04-17 13:15:03.072908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:59.253 [2024-04-17 13:15:03.392044] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:30:59.253 [2024-04-17 13:15:03.392188] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:30:59.253 [2024-04-17 13:15:03.392233] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:00.192 [2024-04-17 13:15:04.110622] spdk_dd.c:1535:main: *ERROR*: Error occurred while performing copy 00:31:00.450 13:15:04 -- common/autotest_common.sh@641 -- # es=216 00:31:00.450 13:15:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:31:00.450 13:15:04 -- common/autotest_common.sh@650 -- # es=88 00:31:00.450 13:15:04 -- common/autotest_common.sh@651 -- # case "$es" in 00:31:00.450 13:15:04 -- common/autotest_common.sh@658 -- # es=1 00:31:00.450 13:15:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:31:00.450 13:15:04 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:31:00.450 13:15:04 -- common/autotest_common.sh@638 -- # local es=0 00:31:00.450 13:15:04 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:31:00.450 13:15:04 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:00.450 13:15:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:00.450 13:15:04 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:00.450 13:15:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:00.450 13:15:04 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:00.450 13:15:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:00.450 13:15:04 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:00.450 13:15:04 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:31:00.450 13:15:04 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:31:00.450 [2024-04-17 13:15:04.593112] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:31:00.450 [2024-04-17 13:15:04.593312] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143859 ] 00:31:00.709 [2024-04-17 13:15:04.767427] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:00.968 [2024-04-17 13:15:04.994276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:01.227 [2024-04-17 13:15:05.311445] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:31:01.227 [2024-04-17 13:15:05.311540] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:31:01.227 [2024-04-17 13:15:05.311574] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:02.229 [2024-04-17 13:15:06.071517] spdk_dd.c:1535:main: *ERROR*: Error occurred while performing copy 00:31:02.488 13:15:06 -- common/autotest_common.sh@641 -- # es=216 00:31:02.488 13:15:06 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:31:02.488 13:15:06 -- common/autotest_common.sh@650 -- # es=88 00:31:02.488 13:15:06 -- common/autotest_common.sh@651 -- # case "$es" in 00:31:02.488 13:15:06 -- common/autotest_common.sh@658 -- # es=1 00:31:02.488 13:15:06 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:31:02.488 13:15:06 -- dd/posix.sh@46 -- # gen_bytes 512 00:31:02.488 13:15:06 -- dd/common.sh@98 -- # xtrace_disable 00:31:02.488 13:15:06 -- common/autotest_common.sh@10 -- # set +x 00:31:02.488 13:15:06 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:02.488 [2024-04-17 13:15:06.567576] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
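After both negative cases, the positive half of the nofollow test (visible just above) regenerates dd.dump0 with gen_bytes 512 and copies through dd.dump0.link without the flag; the "Copying: 512/512" line and the long [[ ... ]] comparison that follow confirm the link was dereferenced and dd.dump1 matches the payload. In coreutils terms:

    # Without nofollow the symlink is followed, so the copy succeeds.
    dd if=dd.dump0.link of=dd.dump1 status=none
    cmp -s dd.dump0 dd.dump1 && echo 'link dereferenced, contents match'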
00:31:02.488 [2024-04-17 13:15:06.567804] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143881 ] 00:31:02.747 [2024-04-17 13:15:06.737944] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:03.006 [2024-04-17 13:15:06.974870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:04.639  Copying: 512/512 [B] (average 500 kBps) 00:31:04.639 00:31:04.639 13:15:08 -- dd/posix.sh@49 -- # [[ c56j0tfk87fk1i1oqbp5jyisxgftvxsj0zui1z6mz93kor328a84eiqq5q3zytujw1saqw8gllrq34nggqkgefxueauo9zrn6nu5a6w34dsxg1ziluwov7igxvaoyieea9etwfcvxp8r3u2kja4v2o0pp8aoj85s7hzfly6vfn91deqjdpaylqb7hf84d1549kwo11fhjhhgkw94rjm7cnzg218hchf9a1di0jx9bnp38bskz0keo55a4085ff2yhb20xnd5tknp5rt1g606ed2tirigcirmpe59kyonfzoroyw7j4a73a36siattjzf5f8o6yc4v4epztjkb7ppb5lg9vu4hrax03rx0oaoozvdze3zkn1380cvfl2tvdjx1c65cx2g44o32pp1e56fuz9jk0ugm2a6lq8vejbrn43bibtavmoz0zgm6wyopczi7hz4jtrqps7bwo2olwkst8751r8s9ky2bi37is77jnvvavkpw58w869hl7zewki2 == \c\5\6\j\0\t\f\k\8\7\f\k\1\i\1\o\q\b\p\5\j\y\i\s\x\g\f\t\v\x\s\j\0\z\u\i\1\z\6\m\z\9\3\k\o\r\3\2\8\a\8\4\e\i\q\q\5\q\3\z\y\t\u\j\w\1\s\a\q\w\8\g\l\l\r\q\3\4\n\g\g\q\k\g\e\f\x\u\e\a\u\o\9\z\r\n\6\n\u\5\a\6\w\3\4\d\s\x\g\1\z\i\l\u\w\o\v\7\i\g\x\v\a\o\y\i\e\e\a\9\e\t\w\f\c\v\x\p\8\r\3\u\2\k\j\a\4\v\2\o\0\p\p\8\a\o\j\8\5\s\7\h\z\f\l\y\6\v\f\n\9\1\d\e\q\j\d\p\a\y\l\q\b\7\h\f\8\4\d\1\5\4\9\k\w\o\1\1\f\h\j\h\h\g\k\w\9\4\r\j\m\7\c\n\z\g\2\1\8\h\c\h\f\9\a\1\d\i\0\j\x\9\b\n\p\3\8\b\s\k\z\0\k\e\o\5\5\a\4\0\8\5\f\f\2\y\h\b\2\0\x\n\d\5\t\k\n\p\5\r\t\1\g\6\0\6\e\d\2\t\i\r\i\g\c\i\r\m\p\e\5\9\k\y\o\n\f\z\o\r\o\y\w\7\j\4\a\7\3\a\3\6\s\i\a\t\t\j\z\f\5\f\8\o\6\y\c\4\v\4\e\p\z\t\j\k\b\7\p\p\b\5\l\g\9\v\u\4\h\r\a\x\0\3\r\x\0\o\a\o\o\z\v\d\z\e\3\z\k\n\1\3\8\0\c\v\f\l\2\t\v\d\j\x\1\c\6\5\c\x\2\g\4\4\o\3\2\p\p\1\e\5\6\f\u\z\9\j\k\0\u\g\m\2\a\6\l\q\8\v\e\j\b\r\n\4\3\b\i\b\t\a\v\m\o\z\0\z\g\m\6\w\y\o\p\c\z\i\7\h\z\4\j\t\r\q\p\s\7\b\w\o\2\o\l\w\k\s\t\8\7\5\1\r\8\s\9\k\y\2\b\i\3\7\i\s\7\7\j\n\v\v\a\v\k\p\w\5\8\w\8\6\9\h\l\7\z\e\w\k\i\2 ]] 00:31:04.639 00:31:04.639 real 0m5.968s 00:31:04.639 user 0m4.852s 00:31:04.639 sys 0m0.791s 00:31:04.639 ************************************ 00:31:04.639 END TEST dd_flag_nofollow 00:31:04.639 ************************************ 00:31:04.639 13:15:08 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:31:04.639 13:15:08 -- common/autotest_common.sh@10 -- # set +x 00:31:04.639 13:15:08 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:31:04.639 13:15:08 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:31:04.639 13:15:08 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:31:04.639 13:15:08 -- common/autotest_common.sh@10 -- # set +x 00:31:04.639 ************************************ 00:31:04.639 START TEST dd_flag_noatime 00:31:04.639 ************************************ 00:31:04.639 13:15:08 -- common/autotest_common.sh@1099 -- # noatime 00:31:04.639 13:15:08 -- dd/posix.sh@53 -- # local atime_if 00:31:04.639 13:15:08 -- dd/posix.sh@54 -- # local atime_of 00:31:04.639 13:15:08 -- dd/posix.sh@58 -- # gen_bytes 512 00:31:04.639 13:15:08 -- dd/common.sh@98 -- # xtrace_disable 00:31:04.639 13:15:08 -- common/autotest_common.sh@10 -- # set +x 00:31:04.639 13:15:08 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:31:04.639 13:15:08 -- dd/posix.sh@60 -- # atime_if=1713359707 00:31:04.639 13:15:08 -- 
dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:04.639 13:15:08 -- dd/posix.sh@61 -- # atime_of=1713359708 00:31:04.639 13:15:08 -- dd/posix.sh@66 -- # sleep 1 00:31:05.575 13:15:09 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:05.575 [2024-04-17 13:15:09.696957] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:31:05.575 [2024-04-17 13:15:09.697167] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143949 ] 00:31:05.834 [2024-04-17 13:15:09.866198] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:06.092 [2024-04-17 13:15:10.113338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:07.728  Copying: 512/512 [B] (average 500 kBps) 00:31:07.728 00:31:07.728 13:15:11 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:31:07.728 13:15:11 -- dd/posix.sh@69 -- # (( atime_if == 1713359707 )) 00:31:07.728 13:15:11 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:07.728 13:15:11 -- dd/posix.sh@70 -- # (( atime_of == 1713359708 )) 00:31:07.728 13:15:11 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:07.728 [2024-04-17 13:15:11.738635] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:31:07.728 [2024-04-17 13:15:11.738922] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143980 ] 00:31:07.986 [2024-04-17 13:15:11.921642] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:08.243 [2024-04-17 13:15:12.167189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:09.876  Copying: 512/512 [B] (average 500 kBps) 00:31:09.876 00:31:09.876 13:15:13 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:31:09.876 13:15:13 -- dd/posix.sh@73 -- # (( atime_if < 1713359712 )) 00:31:09.876 ************************************ 00:31:09.876 END TEST dd_flag_noatime 00:31:09.876 ************************************ 00:31:09.876 00:31:09.876 real 0m5.119s 00:31:09.876 user 0m3.332s 00:31:09.876 sys 0m0.528s 00:31:09.876 13:15:13 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:31:09.876 13:15:13 -- common/autotest_common.sh@10 -- # set +x 00:31:09.876 13:15:13 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:31:09.876 13:15:13 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:31:09.876 13:15:13 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:31:09.876 13:15:13 -- common/autotest_common.sh@10 -- # set +x 00:31:09.876 ************************************ 00:31:09.876 START TEST dd_flags_misc 00:31:09.876 ************************************ 00:31:09.876 13:15:13 -- common/autotest_common.sh@1099 -- # io 00:31:09.876 13:15:13 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:31:09.876 13:15:13 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:31:09.876 
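The noatime test just completed works purely off stat timestamps: it records the source file's atime, copies with --iflag=noatime and asserts the atime did not move ((( atime_if == 1713359707 ))), then copies again without the flag and asserts it did ((( atime_if < 1713359712 ))). The shape of those assertions, with coreutils dd standing in for spdk_dd (a relatime mount can mask the second check; this run was evidently taken on a filesystem where it holds):

    atime_before=$(stat --printf=%X dd.dump0)
    dd if=dd.dump0 iflag=noatime of=dd.dump1 status=none
    (( $(stat --printf=%X dd.dump0) == atime_before )) || echo 'atime moved'
    sleep 1
    dd if=dd.dump0 of=dd.dump1 status=none
    (( $(stat --printf=%X dd.dump0) > atime_before )) || echo 'atime held'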
13:15:13 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:31:09.876 13:15:13 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:31:09.876 13:15:13 -- dd/posix.sh@86 -- # gen_bytes 512 00:31:09.876 13:15:13 -- dd/common.sh@98 -- # xtrace_disable 00:31:09.876 13:15:13 -- common/autotest_common.sh@10 -- # set +x 00:31:09.876 13:15:13 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:09.876 13:15:13 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:31:09.876 [2024-04-17 13:15:13.879759] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:31:09.876 [2024-04-17 13:15:13.880283] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144027 ] 00:31:10.134 [2024-04-17 13:15:14.044498] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:10.392 [2024-04-17 13:15:14.316137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:12.026  Copying: 512/512 [B] (average 500 kBps) 00:31:12.026 00:31:12.026 13:15:15 -- dd/posix.sh@93 -- # [[ nv3r4ot6mi83rflqkppve69688ryi05v7a8bwc6xzkrwdef08vjzcq5cu1225bq0h4y4tlwuyft35y0xqow70hxt44mejrmqbflm8efly5lr98y8wo44iv6on7wgog788wlzewg644nrwy6r65b823dlur30whqlvx6agawaek4yvdjjd1c06z2ha03intgph869dqgc1mnl0jfluct6od7v66pbojs265embdogdks8c7olfbmwrh8neol7onmrcu1bbibcc9gj3wlx5krkil72rj3m91pa7b60d986ynrggbv7ejttsdx6g0mvgfssdn1etvnnvqo81q8deq2mworid0gwhjqzafn2fi62zn9on9jud4suio0vrx8gs2fywjxopunfmo1vll5ouiapxpy1k1ptopki4dhj99u0hxf3kbcpk2j2prbfk1b7x0f1xupoqdlm8irljy23twkznl31vu2t988byu03fwqfbnvdj5y5uku28j11jxga9cf0 == \n\v\3\r\4\o\t\6\m\i\8\3\r\f\l\q\k\p\p\v\e\6\9\6\8\8\r\y\i\0\5\v\7\a\8\b\w\c\6\x\z\k\r\w\d\e\f\0\8\v\j\z\c\q\5\c\u\1\2\2\5\b\q\0\h\4\y\4\t\l\w\u\y\f\t\3\5\y\0\x\q\o\w\7\0\h\x\t\4\4\m\e\j\r\m\q\b\f\l\m\8\e\f\l\y\5\l\r\9\8\y\8\w\o\4\4\i\v\6\o\n\7\w\g\o\g\7\8\8\w\l\z\e\w\g\6\4\4\n\r\w\y\6\r\6\5\b\8\2\3\d\l\u\r\3\0\w\h\q\l\v\x\6\a\g\a\w\a\e\k\4\y\v\d\j\j\d\1\c\0\6\z\2\h\a\0\3\i\n\t\g\p\h\8\6\9\d\q\g\c\1\m\n\l\0\j\f\l\u\c\t\6\o\d\7\v\6\6\p\b\o\j\s\2\6\5\e\m\b\d\o\g\d\k\s\8\c\7\o\l\f\b\m\w\r\h\8\n\e\o\l\7\o\n\m\r\c\u\1\b\b\i\b\c\c\9\g\j\3\w\l\x\5\k\r\k\i\l\7\2\r\j\3\m\9\1\p\a\7\b\6\0\d\9\8\6\y\n\r\g\g\b\v\7\e\j\t\t\s\d\x\6\g\0\m\v\g\f\s\s\d\n\1\e\t\v\n\n\v\q\o\8\1\q\8\d\e\q\2\m\w\o\r\i\d\0\g\w\h\j\q\z\a\f\n\2\f\i\6\2\z\n\9\o\n\9\j\u\d\4\s\u\i\o\0\v\r\x\8\g\s\2\f\y\w\j\x\o\p\u\n\f\m\o\1\v\l\l\5\o\u\i\a\p\x\p\y\1\k\1\p\t\o\p\k\i\4\d\h\j\9\9\u\0\h\x\f\3\k\b\c\p\k\2\j\2\p\r\b\f\k\1\b\7\x\0\f\1\x\u\p\o\q\d\l\m\8\i\r\l\j\y\2\3\t\w\k\z\n\l\3\1\v\u\2\t\9\8\8\b\y\u\0\3\f\w\q\f\b\n\v\d\j\5\y\5\u\k\u\2\8\j\1\1\j\x\g\a\9\c\f\0 ]] 00:31:12.026 13:15:15 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:12.026 13:15:15 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:31:12.026 [2024-04-17 13:15:16.041857] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
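The eight copies in this dd_flags_misc block are driven by the nested loop whose setup is visible in the xtrace above: flags_ro=(direct nonblock) crossed with flags_rw=(direct nonblock sync dsync), each run rewriting dd.dump1 and re-checking the same 512-byte payload. Reconstructed below; the loop body beyond the spdk_dd call is inferred rather than shown verbatim in this excerpt:

    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)
    for flag_ro in "${flags_ro[@]}"; do
        for flag_rw in "${flags_rw[@]}"; do
            "${DD_APP[@]}" --if=dd.dump0 --iflag="$flag_ro" \
                           --of=dd.dump1 --oflag="$flag_rw"
            # ...followed by the [[ payload == expected ]] comparison
        done
    done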
00:31:12.026 [2024-04-17 13:15:16.042375] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144075 ] 00:31:12.285 [2024-04-17 13:15:16.216402] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:12.544 [2024-04-17 13:15:16.437088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:14.178  Copying: 512/512 [B] (average 500 kBps) 00:31:14.178 00:31:14.178 13:15:18 -- dd/posix.sh@93 -- # [[ nv3r4ot6mi83rflqkppve69688ryi05v7a8bwc6xzkrwdef08vjzcq5cu1225bq0h4y4tlwuyft35y0xqow70hxt44mejrmqbflm8efly5lr98y8wo44iv6on7wgog788wlzewg644nrwy6r65b823dlur30whqlvx6agawaek4yvdjjd1c06z2ha03intgph869dqgc1mnl0jfluct6od7v66pbojs265embdogdks8c7olfbmwrh8neol7onmrcu1bbibcc9gj3wlx5krkil72rj3m91pa7b60d986ynrggbv7ejttsdx6g0mvgfssdn1etvnnvqo81q8deq2mworid0gwhjqzafn2fi62zn9on9jud4suio0vrx8gs2fywjxopunfmo1vll5ouiapxpy1k1ptopki4dhj99u0hxf3kbcpk2j2prbfk1b7x0f1xupoqdlm8irljy23twkznl31vu2t988byu03fwqfbnvdj5y5uku28j11jxga9cf0 == \n\v\3\r\4\o\t\6\m\i\8\3\r\f\l\q\k\p\p\v\e\6\9\6\8\8\r\y\i\0\5\v\7\a\8\b\w\c\6\x\z\k\r\w\d\e\f\0\8\v\j\z\c\q\5\c\u\1\2\2\5\b\q\0\h\4\y\4\t\l\w\u\y\f\t\3\5\y\0\x\q\o\w\7\0\h\x\t\4\4\m\e\j\r\m\q\b\f\l\m\8\e\f\l\y\5\l\r\9\8\y\8\w\o\4\4\i\v\6\o\n\7\w\g\o\g\7\8\8\w\l\z\e\w\g\6\4\4\n\r\w\y\6\r\6\5\b\8\2\3\d\l\u\r\3\0\w\h\q\l\v\x\6\a\g\a\w\a\e\k\4\y\v\d\j\j\d\1\c\0\6\z\2\h\a\0\3\i\n\t\g\p\h\8\6\9\d\q\g\c\1\m\n\l\0\j\f\l\u\c\t\6\o\d\7\v\6\6\p\b\o\j\s\2\6\5\e\m\b\d\o\g\d\k\s\8\c\7\o\l\f\b\m\w\r\h\8\n\e\o\l\7\o\n\m\r\c\u\1\b\b\i\b\c\c\9\g\j\3\w\l\x\5\k\r\k\i\l\7\2\r\j\3\m\9\1\p\a\7\b\6\0\d\9\8\6\y\n\r\g\g\b\v\7\e\j\t\t\s\d\x\6\g\0\m\v\g\f\s\s\d\n\1\e\t\v\n\n\v\q\o\8\1\q\8\d\e\q\2\m\w\o\r\i\d\0\g\w\h\j\q\z\a\f\n\2\f\i\6\2\z\n\9\o\n\9\j\u\d\4\s\u\i\o\0\v\r\x\8\g\s\2\f\y\w\j\x\o\p\u\n\f\m\o\1\v\l\l\5\o\u\i\a\p\x\p\y\1\k\1\p\t\o\p\k\i\4\d\h\j\9\9\u\0\h\x\f\3\k\b\c\p\k\2\j\2\p\r\b\f\k\1\b\7\x\0\f\1\x\u\p\o\q\d\l\m\8\i\r\l\j\y\2\3\t\w\k\z\n\l\3\1\v\u\2\t\9\8\8\b\y\u\0\3\f\w\q\f\b\n\v\d\j\5\y\5\u\k\u\2\8\j\1\1\j\x\g\a\9\c\f\0 ]] 00:31:14.178 13:15:18 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:14.178 13:15:18 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:31:14.178 [2024-04-17 13:15:18.157155] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:31:14.178 [2024-04-17 13:15:18.157683] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144100 ] 00:31:14.178 [2024-04-17 13:15:18.317748] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:14.438 [2024-04-17 13:15:18.540688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:15.943  Copying: 512/512 [B] (average 166 kBps) 00:31:15.943 00:31:15.943 13:15:20 -- dd/posix.sh@93 -- # [[ nv3r4ot6mi83rflqkppve69688ryi05v7a8bwc6xzkrwdef08vjzcq5cu1225bq0h4y4tlwuyft35y0xqow70hxt44mejrmqbflm8efly5lr98y8wo44iv6on7wgog788wlzewg644nrwy6r65b823dlur30whqlvx6agawaek4yvdjjd1c06z2ha03intgph869dqgc1mnl0jfluct6od7v66pbojs265embdogdks8c7olfbmwrh8neol7onmrcu1bbibcc9gj3wlx5krkil72rj3m91pa7b60d986ynrggbv7ejttsdx6g0mvgfssdn1etvnnvqo81q8deq2mworid0gwhjqzafn2fi62zn9on9jud4suio0vrx8gs2fywjxopunfmo1vll5ouiapxpy1k1ptopki4dhj99u0hxf3kbcpk2j2prbfk1b7x0f1xupoqdlm8irljy23twkznl31vu2t988byu03fwqfbnvdj5y5uku28j11jxga9cf0 == \n\v\3\r\4\o\t\6\m\i\8\3\r\f\l\q\k\p\p\v\e\6\9\6\8\8\r\y\i\0\5\v\7\a\8\b\w\c\6\x\z\k\r\w\d\e\f\0\8\v\j\z\c\q\5\c\u\1\2\2\5\b\q\0\h\4\y\4\t\l\w\u\y\f\t\3\5\y\0\x\q\o\w\7\0\h\x\t\4\4\m\e\j\r\m\q\b\f\l\m\8\e\f\l\y\5\l\r\9\8\y\8\w\o\4\4\i\v\6\o\n\7\w\g\o\g\7\8\8\w\l\z\e\w\g\6\4\4\n\r\w\y\6\r\6\5\b\8\2\3\d\l\u\r\3\0\w\h\q\l\v\x\6\a\g\a\w\a\e\k\4\y\v\d\j\j\d\1\c\0\6\z\2\h\a\0\3\i\n\t\g\p\h\8\6\9\d\q\g\c\1\m\n\l\0\j\f\l\u\c\t\6\o\d\7\v\6\6\p\b\o\j\s\2\6\5\e\m\b\d\o\g\d\k\s\8\c\7\o\l\f\b\m\w\r\h\8\n\e\o\l\7\o\n\m\r\c\u\1\b\b\i\b\c\c\9\g\j\3\w\l\x\5\k\r\k\i\l\7\2\r\j\3\m\9\1\p\a\7\b\6\0\d\9\8\6\y\n\r\g\g\b\v\7\e\j\t\t\s\d\x\6\g\0\m\v\g\f\s\s\d\n\1\e\t\v\n\n\v\q\o\8\1\q\8\d\e\q\2\m\w\o\r\i\d\0\g\w\h\j\q\z\a\f\n\2\f\i\6\2\z\n\9\o\n\9\j\u\d\4\s\u\i\o\0\v\r\x\8\g\s\2\f\y\w\j\x\o\p\u\n\f\m\o\1\v\l\l\5\o\u\i\a\p\x\p\y\1\k\1\p\t\o\p\k\i\4\d\h\j\9\9\u\0\h\x\f\3\k\b\c\p\k\2\j\2\p\r\b\f\k\1\b\7\x\0\f\1\x\u\p\o\q\d\l\m\8\i\r\l\j\y\2\3\t\w\k\z\n\l\3\1\v\u\2\t\9\8\8\b\y\u\0\3\f\w\q\f\b\n\v\d\j\5\y\5\u\k\u\2\8\j\1\1\j\x\g\a\9\c\f\0 ]] 00:31:15.943 13:15:20 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:15.943 13:15:20 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:31:15.943 [2024-04-17 13:15:20.088672] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:31:15.943 [2024-04-17 13:15:20.089058] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144128 ] 00:31:16.202 [2024-04-17 13:15:20.248825] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:16.461 [2024-04-17 13:15:20.465356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:18.096  Copying: 512/512 [B] (average 7420 Bps) 00:31:18.096 00:31:18.097 13:15:22 -- dd/posix.sh@93 -- # [[ nv3r4ot6mi83rflqkppve69688ryi05v7a8bwc6xzkrwdef08vjzcq5cu1225bq0h4y4tlwuyft35y0xqow70hxt44mejrmqbflm8efly5lr98y8wo44iv6on7wgog788wlzewg644nrwy6r65b823dlur30whqlvx6agawaek4yvdjjd1c06z2ha03intgph869dqgc1mnl0jfluct6od7v66pbojs265embdogdks8c7olfbmwrh8neol7onmrcu1bbibcc9gj3wlx5krkil72rj3m91pa7b60d986ynrggbv7ejttsdx6g0mvgfssdn1etvnnvqo81q8deq2mworid0gwhjqzafn2fi62zn9on9jud4suio0vrx8gs2fywjxopunfmo1vll5ouiapxpy1k1ptopki4dhj99u0hxf3kbcpk2j2prbfk1b7x0f1xupoqdlm8irljy23twkznl31vu2t988byu03fwqfbnvdj5y5uku28j11jxga9cf0 == \n\v\3\r\4\o\t\6\m\i\8\3\r\f\l\q\k\p\p\v\e\6\9\6\8\8\r\y\i\0\5\v\7\a\8\b\w\c\6\x\z\k\r\w\d\e\f\0\8\v\j\z\c\q\5\c\u\1\2\2\5\b\q\0\h\4\y\4\t\l\w\u\y\f\t\3\5\y\0\x\q\o\w\7\0\h\x\t\4\4\m\e\j\r\m\q\b\f\l\m\8\e\f\l\y\5\l\r\9\8\y\8\w\o\4\4\i\v\6\o\n\7\w\g\o\g\7\8\8\w\l\z\e\w\g\6\4\4\n\r\w\y\6\r\6\5\b\8\2\3\d\l\u\r\3\0\w\h\q\l\v\x\6\a\g\a\w\a\e\k\4\y\v\d\j\j\d\1\c\0\6\z\2\h\a\0\3\i\n\t\g\p\h\8\6\9\d\q\g\c\1\m\n\l\0\j\f\l\u\c\t\6\o\d\7\v\6\6\p\b\o\j\s\2\6\5\e\m\b\d\o\g\d\k\s\8\c\7\o\l\f\b\m\w\r\h\8\n\e\o\l\7\o\n\m\r\c\u\1\b\b\i\b\c\c\9\g\j\3\w\l\x\5\k\r\k\i\l\7\2\r\j\3\m\9\1\p\a\7\b\6\0\d\9\8\6\y\n\r\g\g\b\v\7\e\j\t\t\s\d\x\6\g\0\m\v\g\f\s\s\d\n\1\e\t\v\n\n\v\q\o\8\1\q\8\d\e\q\2\m\w\o\r\i\d\0\g\w\h\j\q\z\a\f\n\2\f\i\6\2\z\n\9\o\n\9\j\u\d\4\s\u\i\o\0\v\r\x\8\g\s\2\f\y\w\j\x\o\p\u\n\f\m\o\1\v\l\l\5\o\u\i\a\p\x\p\y\1\k\1\p\t\o\p\k\i\4\d\h\j\9\9\u\0\h\x\f\3\k\b\c\p\k\2\j\2\p\r\b\f\k\1\b\7\x\0\f\1\x\u\p\o\q\d\l\m\8\i\r\l\j\y\2\3\t\w\k\z\n\l\3\1\v\u\2\t\9\8\8\b\y\u\0\3\f\w\q\f\b\n\v\d\j\5\y\5\u\k\u\2\8\j\1\1\j\x\g\a\9\c\f\0 ]] 00:31:18.097 13:15:22 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:31:18.097 13:15:22 -- dd/posix.sh@86 -- # gen_bytes 512 00:31:18.097 13:15:22 -- dd/common.sh@98 -- # xtrace_disable 00:31:18.097 13:15:22 -- common/autotest_common.sh@10 -- # set +x 00:31:18.097 13:15:22 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:18.097 13:15:22 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:31:18.097 [2024-04-17 13:15:22.118396] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
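The dsync run above is the one clear outlier in this matrix: 512 bytes at an average of 7420 Bps, versus roughly 166-500 kBps for every other flag combination, because O_DSYNC forces each write to reach stable storage before it returns. The same effect is easy to reproduce with coreutils:

    # O_DSYNC makes every 512-byte write wait on the device; compare timings.
    time dd if=/dev/zero of=./dsync_probe bs=512 count=64 oflag=dsync status=none
    time dd if=/dev/zero of=./dsync_probe bs=512 count=64 status=none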
00:31:18.097 [2024-04-17 13:15:22.118803] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144153 ] 00:31:18.356 [2024-04-17 13:15:22.288700] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:18.356 [2024-04-17 13:15:22.488989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:19.858  Copying: 512/512 [B] (average 500 kBps) 00:31:19.858 00:31:20.116 13:15:24 -- dd/posix.sh@93 -- # [[ 2t1tkxae5j17hscalhu0pvl40pin4onx28c5kulr5gmh1x2mhufx7w0qct3ekwo6lnx8mkxfjkowu3wnq0alnhm8kus8fj0k15dff9uw2cn02xurrutxbu4j0jp6ovkn3daz8mhfwsdxg00nc5wbh5z19rod86t7w2ksyrnqupxdirxm402zxelukgxbi7il5cd413bbbqn8uo9zdvxv1grz61efon8jk4o7f4jmn2ow61nb7wtvrjx9o7wakl5ojy01hfyqip653ql6g1waapc9il1m62ro109ry0sf69ribyt1rhrbx6pynmrqg97ez3ofncvpvvl45m13mwx0iqccegchyzpu18tcbbaiufzk0z17gbfn8yzvxq7ek966wh4ss1qm3gdczb8jrjsv0gr72awinr4x8o0af9r8nl72shrlszxq9qkjzr5i9lo20grrh7rpk0oaksex76qwairn7n180tnw4zxx6valw4wxmyn29d9jayg00tpxz9yo == \2\t\1\t\k\x\a\e\5\j\1\7\h\s\c\a\l\h\u\0\p\v\l\4\0\p\i\n\4\o\n\x\2\8\c\5\k\u\l\r\5\g\m\h\1\x\2\m\h\u\f\x\7\w\0\q\c\t\3\e\k\w\o\6\l\n\x\8\m\k\x\f\j\k\o\w\u\3\w\n\q\0\a\l\n\h\m\8\k\u\s\8\f\j\0\k\1\5\d\f\f\9\u\w\2\c\n\0\2\x\u\r\r\u\t\x\b\u\4\j\0\j\p\6\o\v\k\n\3\d\a\z\8\m\h\f\w\s\d\x\g\0\0\n\c\5\w\b\h\5\z\1\9\r\o\d\8\6\t\7\w\2\k\s\y\r\n\q\u\p\x\d\i\r\x\m\4\0\2\z\x\e\l\u\k\g\x\b\i\7\i\l\5\c\d\4\1\3\b\b\b\q\n\8\u\o\9\z\d\v\x\v\1\g\r\z\6\1\e\f\o\n\8\j\k\4\o\7\f\4\j\m\n\2\o\w\6\1\n\b\7\w\t\v\r\j\x\9\o\7\w\a\k\l\5\o\j\y\0\1\h\f\y\q\i\p\6\5\3\q\l\6\g\1\w\a\a\p\c\9\i\l\1\m\6\2\r\o\1\0\9\r\y\0\s\f\6\9\r\i\b\y\t\1\r\h\r\b\x\6\p\y\n\m\r\q\g\9\7\e\z\3\o\f\n\c\v\p\v\v\l\4\5\m\1\3\m\w\x\0\i\q\c\c\e\g\c\h\y\z\p\u\1\8\t\c\b\b\a\i\u\f\z\k\0\z\1\7\g\b\f\n\8\y\z\v\x\q\7\e\k\9\6\6\w\h\4\s\s\1\q\m\3\g\d\c\z\b\8\j\r\j\s\v\0\g\r\7\2\a\w\i\n\r\4\x\8\o\0\a\f\9\r\8\n\l\7\2\s\h\r\l\s\z\x\q\9\q\k\j\z\r\5\i\9\l\o\2\0\g\r\r\h\7\r\p\k\0\o\a\k\s\e\x\7\6\q\w\a\i\r\n\7\n\1\8\0\t\n\w\4\z\x\x\6\v\a\l\w\4\w\x\m\y\n\2\9\d\9\j\a\y\g\0\0\t\p\x\z\9\y\o ]] 00:31:20.116 13:15:24 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:20.116 13:15:24 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:31:20.116 [2024-04-17 13:15:24.070064] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:31:20.116 [2024-04-17 13:15:24.070477] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144181 ] 00:31:20.116 [2024-04-17 13:15:24.238705] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:20.373 [2024-04-17 13:15:24.453290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:22.031  Copying: 512/512 [B] (average 500 kBps) 00:31:22.031 00:31:22.031 13:15:25 -- dd/posix.sh@93 -- # [[ 2t1tkxae5j17hscalhu0pvl40pin4onx28c5kulr5gmh1x2mhufx7w0qct3ekwo6lnx8mkxfjkowu3wnq0alnhm8kus8fj0k15dff9uw2cn02xurrutxbu4j0jp6ovkn3daz8mhfwsdxg00nc5wbh5z19rod86t7w2ksyrnqupxdirxm402zxelukgxbi7il5cd413bbbqn8uo9zdvxv1grz61efon8jk4o7f4jmn2ow61nb7wtvrjx9o7wakl5ojy01hfyqip653ql6g1waapc9il1m62ro109ry0sf69ribyt1rhrbx6pynmrqg97ez3ofncvpvvl45m13mwx0iqccegchyzpu18tcbbaiufzk0z17gbfn8yzvxq7ek966wh4ss1qm3gdczb8jrjsv0gr72awinr4x8o0af9r8nl72shrlszxq9qkjzr5i9lo20grrh7rpk0oaksex76qwairn7n180tnw4zxx6valw4wxmyn29d9jayg00tpxz9yo == \2\t\1\t\k\x\a\e\5\j\1\7\h\s\c\a\l\h\u\0\p\v\l\4\0\p\i\n\4\o\n\x\2\8\c\5\k\u\l\r\5\g\m\h\1\x\2\m\h\u\f\x\7\w\0\q\c\t\3\e\k\w\o\6\l\n\x\8\m\k\x\f\j\k\o\w\u\3\w\n\q\0\a\l\n\h\m\8\k\u\s\8\f\j\0\k\1\5\d\f\f\9\u\w\2\c\n\0\2\x\u\r\r\u\t\x\b\u\4\j\0\j\p\6\o\v\k\n\3\d\a\z\8\m\h\f\w\s\d\x\g\0\0\n\c\5\w\b\h\5\z\1\9\r\o\d\8\6\t\7\w\2\k\s\y\r\n\q\u\p\x\d\i\r\x\m\4\0\2\z\x\e\l\u\k\g\x\b\i\7\i\l\5\c\d\4\1\3\b\b\b\q\n\8\u\o\9\z\d\v\x\v\1\g\r\z\6\1\e\f\o\n\8\j\k\4\o\7\f\4\j\m\n\2\o\w\6\1\n\b\7\w\t\v\r\j\x\9\o\7\w\a\k\l\5\o\j\y\0\1\h\f\y\q\i\p\6\5\3\q\l\6\g\1\w\a\a\p\c\9\i\l\1\m\6\2\r\o\1\0\9\r\y\0\s\f\6\9\r\i\b\y\t\1\r\h\r\b\x\6\p\y\n\m\r\q\g\9\7\e\z\3\o\f\n\c\v\p\v\v\l\4\5\m\1\3\m\w\x\0\i\q\c\c\e\g\c\h\y\z\p\u\1\8\t\c\b\b\a\i\u\f\z\k\0\z\1\7\g\b\f\n\8\y\z\v\x\q\7\e\k\9\6\6\w\h\4\s\s\1\q\m\3\g\d\c\z\b\8\j\r\j\s\v\0\g\r\7\2\a\w\i\n\r\4\x\8\o\0\a\f\9\r\8\n\l\7\2\s\h\r\l\s\z\x\q\9\q\k\j\z\r\5\i\9\l\o\2\0\g\r\r\h\7\r\p\k\0\o\a\k\s\e\x\7\6\q\w\a\i\r\n\7\n\1\8\0\t\n\w\4\z\x\x\6\v\a\l\w\4\w\x\m\y\n\2\9\d\9\j\a\y\g\0\0\t\p\x\z\9\y\o ]] 00:31:22.031 13:15:25 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:22.031 13:15:25 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:31:22.031 [2024-04-17 13:15:25.966118] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:31:22.031 [2024-04-17 13:15:25.966752] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144229 ] 00:31:22.031 [2024-04-17 13:15:26.136592] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:22.289 [2024-04-17 13:15:26.348260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:23.921  Copying: 512/512 [B] (average 250 kBps) 00:31:23.921 00:31:23.921 13:15:27 -- dd/posix.sh@93 -- # [[ 2t1tkxae5j17hscalhu0pvl40pin4onx28c5kulr5gmh1x2mhufx7w0qct3ekwo6lnx8mkxfjkowu3wnq0alnhm8kus8fj0k15dff9uw2cn02xurrutxbu4j0jp6ovkn3daz8mhfwsdxg00nc5wbh5z19rod86t7w2ksyrnqupxdirxm402zxelukgxbi7il5cd413bbbqn8uo9zdvxv1grz61efon8jk4o7f4jmn2ow61nb7wtvrjx9o7wakl5ojy01hfyqip653ql6g1waapc9il1m62ro109ry0sf69ribyt1rhrbx6pynmrqg97ez3ofncvpvvl45m13mwx0iqccegchyzpu18tcbbaiufzk0z17gbfn8yzvxq7ek966wh4ss1qm3gdczb8jrjsv0gr72awinr4x8o0af9r8nl72shrlszxq9qkjzr5i9lo20grrh7rpk0oaksex76qwairn7n180tnw4zxx6valw4wxmyn29d9jayg00tpxz9yo == \2\t\1\t\k\x\a\e\5\j\1\7\h\s\c\a\l\h\u\0\p\v\l\4\0\p\i\n\4\o\n\x\2\8\c\5\k\u\l\r\5\g\m\h\1\x\2\m\h\u\f\x\7\w\0\q\c\t\3\e\k\w\o\6\l\n\x\8\m\k\x\f\j\k\o\w\u\3\w\n\q\0\a\l\n\h\m\8\k\u\s\8\f\j\0\k\1\5\d\f\f\9\u\w\2\c\n\0\2\x\u\r\r\u\t\x\b\u\4\j\0\j\p\6\o\v\k\n\3\d\a\z\8\m\h\f\w\s\d\x\g\0\0\n\c\5\w\b\h\5\z\1\9\r\o\d\8\6\t\7\w\2\k\s\y\r\n\q\u\p\x\d\i\r\x\m\4\0\2\z\x\e\l\u\k\g\x\b\i\7\i\l\5\c\d\4\1\3\b\b\b\q\n\8\u\o\9\z\d\v\x\v\1\g\r\z\6\1\e\f\o\n\8\j\k\4\o\7\f\4\j\m\n\2\o\w\6\1\n\b\7\w\t\v\r\j\x\9\o\7\w\a\k\l\5\o\j\y\0\1\h\f\y\q\i\p\6\5\3\q\l\6\g\1\w\a\a\p\c\9\i\l\1\m\6\2\r\o\1\0\9\r\y\0\s\f\6\9\r\i\b\y\t\1\r\h\r\b\x\6\p\y\n\m\r\q\g\9\7\e\z\3\o\f\n\c\v\p\v\v\l\4\5\m\1\3\m\w\x\0\i\q\c\c\e\g\c\h\y\z\p\u\1\8\t\c\b\b\a\i\u\f\z\k\0\z\1\7\g\b\f\n\8\y\z\v\x\q\7\e\k\9\6\6\w\h\4\s\s\1\q\m\3\g\d\c\z\b\8\j\r\j\s\v\0\g\r\7\2\a\w\i\n\r\4\x\8\o\0\a\f\9\r\8\n\l\7\2\s\h\r\l\s\z\x\q\9\q\k\j\z\r\5\i\9\l\o\2\0\g\r\r\h\7\r\p\k\0\o\a\k\s\e\x\7\6\q\w\a\i\r\n\7\n\1\8\0\t\n\w\4\z\x\x\6\v\a\l\w\4\w\x\m\y\n\2\9\d\9\j\a\y\g\0\0\t\p\x\z\9\y\o ]] 00:31:23.921 13:15:27 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:23.921 13:15:27 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:31:23.921 [2024-04-17 13:15:27.885531] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:31:23.921 [2024-04-17 13:15:27.885885] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144253 ] 00:31:23.921 [2024-04-17 13:15:28.046398] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:24.487 [2024-04-17 13:15:28.341503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:26.120  Copying: 512/512 [B] (average 250 kBps) 00:31:26.120 00:31:26.120 ************************************ 00:31:26.120 END TEST dd_flags_misc 00:31:26.120 ************************************ 00:31:26.120 13:15:29 -- dd/posix.sh@93 -- # [[ 2t1tkxae5j17hscalhu0pvl40pin4onx28c5kulr5gmh1x2mhufx7w0qct3ekwo6lnx8mkxfjkowu3wnq0alnhm8kus8fj0k15dff9uw2cn02xurrutxbu4j0jp6ovkn3daz8mhfwsdxg00nc5wbh5z19rod86t7w2ksyrnqupxdirxm402zxelukgxbi7il5cd413bbbqn8uo9zdvxv1grz61efon8jk4o7f4jmn2ow61nb7wtvrjx9o7wakl5ojy01hfyqip653ql6g1waapc9il1m62ro109ry0sf69ribyt1rhrbx6pynmrqg97ez3ofncvpvvl45m13mwx0iqccegchyzpu18tcbbaiufzk0z17gbfn8yzvxq7ek966wh4ss1qm3gdczb8jrjsv0gr72awinr4x8o0af9r8nl72shrlszxq9qkjzr5i9lo20grrh7rpk0oaksex76qwairn7n180tnw4zxx6valw4wxmyn29d9jayg00tpxz9yo == \2\t\1\t\k\x\a\e\5\j\1\7\h\s\c\a\l\h\u\0\p\v\l\4\0\p\i\n\4\o\n\x\2\8\c\5\k\u\l\r\5\g\m\h\1\x\2\m\h\u\f\x\7\w\0\q\c\t\3\e\k\w\o\6\l\n\x\8\m\k\x\f\j\k\o\w\u\3\w\n\q\0\a\l\n\h\m\8\k\u\s\8\f\j\0\k\1\5\d\f\f\9\u\w\2\c\n\0\2\x\u\r\r\u\t\x\b\u\4\j\0\j\p\6\o\v\k\n\3\d\a\z\8\m\h\f\w\s\d\x\g\0\0\n\c\5\w\b\h\5\z\1\9\r\o\d\8\6\t\7\w\2\k\s\y\r\n\q\u\p\x\d\i\r\x\m\4\0\2\z\x\e\l\u\k\g\x\b\i\7\i\l\5\c\d\4\1\3\b\b\b\q\n\8\u\o\9\z\d\v\x\v\1\g\r\z\6\1\e\f\o\n\8\j\k\4\o\7\f\4\j\m\n\2\o\w\6\1\n\b\7\w\t\v\r\j\x\9\o\7\w\a\k\l\5\o\j\y\0\1\h\f\y\q\i\p\6\5\3\q\l\6\g\1\w\a\a\p\c\9\i\l\1\m\6\2\r\o\1\0\9\r\y\0\s\f\6\9\r\i\b\y\t\1\r\h\r\b\x\6\p\y\n\m\r\q\g\9\7\e\z\3\o\f\n\c\v\p\v\v\l\4\5\m\1\3\m\w\x\0\i\q\c\c\e\g\c\h\y\z\p\u\1\8\t\c\b\b\a\i\u\f\z\k\0\z\1\7\g\b\f\n\8\y\z\v\x\q\7\e\k\9\6\6\w\h\4\s\s\1\q\m\3\g\d\c\z\b\8\j\r\j\s\v\0\g\r\7\2\a\w\i\n\r\4\x\8\o\0\a\f\9\r\8\n\l\7\2\s\h\r\l\s\z\x\q\9\q\k\j\z\r\5\i\9\l\o\2\0\g\r\r\h\7\r\p\k\0\o\a\k\s\e\x\7\6\q\w\a\i\r\n\7\n\1\8\0\t\n\w\4\z\x\x\6\v\a\l\w\4\w\x\m\y\n\2\9\d\9\j\a\y\g\0\0\t\p\x\z\9\y\o ]] 00:31:26.120 00:31:26.120 real 0m16.036s 00:31:26.120 user 0m13.018s 00:31:26.120 sys 0m1.859s 00:31:26.120 13:15:29 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:31:26.120 13:15:29 -- common/autotest_common.sh@10 -- # set +x 00:31:26.120 13:15:29 -- dd/posix.sh@131 -- # tests_forced_aio 00:31:26.121 13:15:29 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', using AIO' 00:31:26.121 * Second test run, using AIO 00:31:26.121 13:15:29 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:31:26.121 13:15:29 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:31:26.121 13:15:29 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:31:26.121 13:15:29 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:31:26.121 13:15:29 -- common/autotest_common.sh@10 -- # set +x 00:31:26.121 ************************************ 00:31:26.121 START TEST dd_flag_append_forced_aio 00:31:26.121 ************************************ 00:31:26.121 13:15:29 -- common/autotest_common.sh@1099 -- # append 00:31:26.121 13:15:29 -- dd/posix.sh@16 -- # local dump0 00:31:26.121 13:15:29 -- dd/posix.sh@17 -- # local dump1 00:31:26.121 13:15:29 -- dd/posix.sh@19 -- # gen_bytes 32 00:31:26.121 13:15:29 -- dd/common.sh@98 -- # xtrace_disable 
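From here the whole posix suite repeats under AIO: the "* Second test run, using AIO" banner above corresponds to dd/posix.sh@113 appending --aio to the DD_APP array, so every subsequent spdk_dd invocation carries the flag. The mechanism is plain bash array composition (the initial DD_APP assignment is assumed, as it falls outside this excerpt):

    DD_APP=(/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd)  # assumed initial value
    DD_APP+=("--aio")                                        # dd/posix.sh@113
    "${DD_APP[@]}" --if=dd.dump0 --of=dd.dump1 --oflag=append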
00:31:26.121 13:15:29 -- common/autotest_common.sh@10 -- # set +x 00:31:26.121 13:15:29 -- dd/posix.sh@19 -- # dump0=9gmk1sgcirw84dv6kgozmsj7rjm5ttla 00:31:26.121 13:15:29 -- dd/posix.sh@20 -- # gen_bytes 32 00:31:26.121 13:15:29 -- dd/common.sh@98 -- # xtrace_disable 00:31:26.121 13:15:29 -- common/autotest_common.sh@10 -- # set +x 00:31:26.121 13:15:29 -- dd/posix.sh@20 -- # dump1=6auad20jbo3joi2taof7zsu9iccejodh 00:31:26.121 13:15:29 -- dd/posix.sh@22 -- # printf %s 9gmk1sgcirw84dv6kgozmsj7rjm5ttla 00:31:26.121 13:15:29 -- dd/posix.sh@23 -- # printf %s 6auad20jbo3joi2taof7zsu9iccejodh 00:31:26.121 13:15:29 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:31:26.121 [2024-04-17 13:15:29.997190] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:31:26.121 [2024-04-17 13:15:29.997536] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144308 ] 00:31:26.121 [2024-04-17 13:15:30.165430] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:26.404 [2024-04-17 13:15:30.383034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:28.034  Copying: 32/32 [B] (average 31 kBps) 00:31:28.034 00:31:28.034 ************************************ 00:31:28.034 END TEST dd_flag_append_forced_aio 00:31:28.034 ************************************ 00:31:28.034 13:15:31 -- dd/posix.sh@27 -- # [[ 6auad20jbo3joi2taof7zsu9iccejodh9gmk1sgcirw84dv6kgozmsj7rjm5ttla == \6\a\u\a\d\2\0\j\b\o\3\j\o\i\2\t\a\o\f\7\z\s\u\9\i\c\c\e\j\o\d\h\9\g\m\k\1\s\g\c\i\r\w\8\4\d\v\6\k\g\o\z\m\s\j\7\r\j\m\5\t\t\l\a ]] 00:31:28.034 00:31:28.034 real 0m1.909s 00:31:28.034 user 0m1.554s 00:31:28.034 sys 0m0.227s 00:31:28.034 13:15:31 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:31:28.035 13:15:31 -- common/autotest_common.sh@10 -- # set +x 00:31:28.035 13:15:31 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:31:28.035 13:15:31 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:31:28.035 13:15:31 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:31:28.035 13:15:31 -- common/autotest_common.sh@10 -- # set +x 00:31:28.035 ************************************ 00:31:28.035 START TEST dd_flag_directory_forced_aio 00:31:28.035 ************************************ 00:31:28.035 13:15:31 -- common/autotest_common.sh@1099 -- # directory 00:31:28.035 13:15:31 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:31:28.035 13:15:31 -- common/autotest_common.sh@638 -- # local es=0 00:31:28.035 13:15:31 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:31:28.035 13:15:31 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:28.035 13:15:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:28.035 13:15:31 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:28.035 13:15:31 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:28.035 13:15:31 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:28.035 13:15:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:28.035 13:15:31 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:28.035 13:15:31 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:31:28.035 13:15:31 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:31:28.035 [2024-04-17 13:15:31.985694] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:31:28.035 [2024-04-17 13:15:31.986115] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144359 ] 00:31:28.035 [2024-04-17 13:15:32.154426] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:28.292 [2024-04-17 13:15:32.418840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:28.858 [2024-04-17 13:15:32.746069] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:31:28.858 [2024-04-17 13:15:32.746316] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:31:28.858 [2024-04-17 13:15:32.746481] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:29.424 [2024-04-17 13:15:33.515803] spdk_dd.c:1535:main: *ERROR*: Error occurred while performing copy 00:31:29.990 13:15:33 -- common/autotest_common.sh@641 -- # es=236 00:31:29.990 13:15:33 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:31:29.990 13:15:33 -- common/autotest_common.sh@650 -- # es=108 00:31:29.990 13:15:33 -- common/autotest_common.sh@651 -- # case "$es" in 00:31:29.990 13:15:33 -- common/autotest_common.sh@658 -- # es=1 00:31:29.990 13:15:33 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:31:29.990 13:15:33 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:31:29.990 13:15:33 -- common/autotest_common.sh@638 -- # local es=0 00:31:29.990 13:15:33 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:31:29.990 13:15:33 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:29.990 13:15:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:29.990 13:15:33 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:29.990 13:15:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:29.990 13:15:33 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:29.990 13:15:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:29.990 13:15:33 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
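Each negative test here wraps spdk_dd in the harness's NOT helper, and the recurring es=236 / (( es > 128 )) / es=108 / es=1 sequence is that helper normalizing a large exit status (236 - 128 = 108) down to a plain failure (es=1) before asserting it is nonzero. A reconstruction of its visible shape only; the real function lives in common/autotest_common.sh and certainly differs in detail:

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && es=$((es - 128))  # presumably strips a signal-style 128 bias
        case "$es" in
            0) ;;                           # command unexpectedly succeeded
            *) es=1 ;;                      # collapse any failure code to 1
        esac
        (( !es == 0 ))                      # NOT succeeds iff the command failed
    }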
00:31:29.990 13:15:33 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:31:29.990 13:15:33 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:31:29.990 [2024-04-17 13:15:34.016140] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:31:29.990 [2024-04-17 13:15:34.016594] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144388 ] 00:31:30.248 [2024-04-17 13:15:34.185504] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:30.506 [2024-04-17 13:15:34.441792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:30.764 [2024-04-17 13:15:34.789606] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:31:30.764 [2024-04-17 13:15:34.789921] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:31:30.764 [2024-04-17 13:15:34.790095] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:31.699 [2024-04-17 13:15:35.513768] spdk_dd.c:1535:main: *ERROR*: Error occurred while performing copy 00:31:31.958 ************************************ 00:31:31.958 END TEST dd_flag_directory_forced_aio 00:31:31.958 ************************************ 00:31:31.958 13:15:35 -- common/autotest_common.sh@641 -- # es=236 00:31:31.958 13:15:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:31:31.958 13:15:35 -- common/autotest_common.sh@650 -- # es=108 00:31:31.958 13:15:35 -- common/autotest_common.sh@651 -- # case "$es" in 00:31:31.958 13:15:35 -- common/autotest_common.sh@658 -- # es=1 00:31:31.958 13:15:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:31:31.958 00:31:31.958 real 0m3.997s 00:31:31.958 user 0m3.329s 00:31:31.958 sys 0m0.462s 00:31:31.958 13:15:35 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:31:31.958 13:15:35 -- common/autotest_common.sh@10 -- # set +x 00:31:31.958 13:15:35 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:31:31.958 13:15:35 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:31:31.958 13:15:35 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:31:31.958 13:15:35 -- common/autotest_common.sh@10 -- # set +x 00:31:31.958 ************************************ 00:31:31.958 START TEST dd_flag_nofollow_forced_aio 00:31:31.958 ************************************ 00:31:31.958 13:15:35 -- common/autotest_common.sh@1099 -- # nofollow 00:31:31.958 13:15:35 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:31:31.958 13:15:35 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:31:31.958 13:15:35 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:31:31.958 13:15:35 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:31:31.958 13:15:35 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:31.958 13:15:35 -- common/autotest_common.sh@638 -- # local es=0 00:31:31.958 13:15:35 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:31.958 13:15:35 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:31.958 13:15:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:31.958 13:15:35 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:31.958 13:15:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:31.958 13:15:35 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:31.958 13:15:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:31.958 13:15:35 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:31.958 13:15:35 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:31:31.958 13:15:35 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:31.958 [2024-04-17 13:15:36.062706] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:31:31.958 [2024-04-17 13:15:36.063199] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144456 ] 00:31:32.216 [2024-04-17 13:15:36.238822] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:32.474 [2024-04-17 13:15:36.484694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:32.733 [2024-04-17 13:15:36.788028] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:31:32.733 [2024-04-17 13:15:36.789207] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:31:32.733 [2024-04-17 13:15:36.789274] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:33.668 [2024-04-17 13:15:37.567672] spdk_dd.c:1535:main: *ERROR*: Error occurred while performing copy 00:31:33.926 13:15:37 -- common/autotest_common.sh@641 -- # es=216 00:31:33.926 13:15:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:31:33.926 13:15:37 -- common/autotest_common.sh@650 -- # es=88 00:31:33.927 13:15:37 -- common/autotest_common.sh@651 -- # case "$es" in 00:31:33.927 13:15:37 -- common/autotest_common.sh@658 -- # es=1 00:31:33.927 13:15:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:31:33.927 13:15:37 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:31:33.927 13:15:37 -- common/autotest_common.sh@638 -- # local es=0 00:31:33.927 13:15:37 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:31:33.927 13:15:37 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:33.927 13:15:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:33.927 13:15:37 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:33.927 13:15:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:33.927 13:15:37 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:33.927 13:15:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:33.927 13:15:37 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:33.927 13:15:37 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:31:33.927 13:15:37 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:31:33.927 [2024-04-17 13:15:38.013726] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:31:33.927 [2024-04-17 13:15:38.013898] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144484 ] 00:31:34.185 [2024-04-17 13:15:38.173733] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:34.444 [2024-04-17 13:15:38.384883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:34.703 [2024-04-17 13:15:38.760355] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:31:34.703 [2024-04-17 13:15:38.760668] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:31:34.703 [2024-04-17 13:15:38.760747] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:35.637 [2024-04-17 13:15:39.515476] spdk_dd.c:1535:main: *ERROR*: Error occurred while performing copy 00:31:35.896 13:15:39 -- common/autotest_common.sh@641 -- # es=216 00:31:35.896 13:15:39 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:31:35.896 13:15:39 -- common/autotest_common.sh@650 -- # es=88 00:31:35.896 13:15:39 -- common/autotest_common.sh@651 -- # case "$es" in 00:31:35.896 13:15:39 -- common/autotest_common.sh@658 -- # es=1 00:31:35.896 13:15:39 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:31:35.896 13:15:39 -- dd/posix.sh@46 -- # gen_bytes 512 00:31:35.896 13:15:39 -- dd/common.sh@98 -- # xtrace_disable 00:31:35.896 13:15:39 -- common/autotest_common.sh@10 -- # set +x 00:31:35.896 13:15:39 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:35.896 [2024-04-17 13:15:39.987115] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
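Note: the nofollow cases traced above link dd.dump0 to dd.dump0.link with ln -fs and expect spdk_dd to fail with ELOOP ("Too many levels of symbolic links") whenever the link side of the copy is opened with --iflag=nofollow or --oflag=nofollow. The surrounding NOT / valid_exec_arg / es= lines are the harness inverting and normalizing the exit status so that an expected failure counts as a pass. A minimal bash sketch of that pattern; the helper body is reconstructed from the trace, not taken from the harness source:

  NOT() {
    local es=0
    "$@" || es=$?                                   # run the wrapped command, keep its status
    if (( es > 128 )); then es=$(( es - 128 )); fi  # strip the 128+signal offset (236 -> 108 above)
    if (( es > 1 )); then es=1; fi                  # collapse every failure class to plain 1
    (( ! es == 0 ))                                 # succeed only if the command failed
  }
  NOT spdk_dd --aio --if=dd.dump0.link --iflag=nofollow --of=dd.dump1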
00:31:35.896 [2024-04-17 13:15:39.987339] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144510 ] 00:31:36.155 [2024-04-17 13:15:40.154438] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:36.412 [2024-04-17 13:15:40.361836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:38.046  Copying: 512/512 [B] (average 500 kBps) 00:31:38.046 00:31:38.046 ************************************ 00:31:38.046 END TEST dd_flag_nofollow_forced_aio 00:31:38.046 ************************************ 00:31:38.046 13:15:41 -- dd/posix.sh@49 -- # [[ gnr0psfuiuyhibl41f59mdq55seftqbbtyxbmn7x8iju8cktttiba63bzwn0zlphjb7qbpjj63mci55k86u6ih8od27nf0xkijrm070hp4ykcixdyk54xnx5n39tbnwb7locyxcdjho0kgzmbgfsd05hpvymeuyra24cqu3qu0b9u0blfh407c4o92vaune60lohoynej8l7xcn0ij5khn748zxeyiki721lpj4eneus7txtm0v1g3brx6nn44plstefivxt7kqu8682il66bp0tzd9o5t75y1uc8bnnylnwk5mm03gy7bi3a0rnn6oesc8dyvh56zfl01da2251l073tfg7ggard06wrn67c9kptc82thtsss915slp1zvxpfdywg41ntzt0wewxgeudn6z3coe7l8ihh378e3xi93ms7aftq9h6lkt6isxfdrtz437jd6od18bqroi7w6jmetts48x2efrlav2uciyp3hwh6isr1u8t5tn298phsp6 == \g\n\r\0\p\s\f\u\i\u\y\h\i\b\l\4\1\f\5\9\m\d\q\5\5\s\e\f\t\q\b\b\t\y\x\b\m\n\7\x\8\i\j\u\8\c\k\t\t\t\i\b\a\6\3\b\z\w\n\0\z\l\p\h\j\b\7\q\b\p\j\j\6\3\m\c\i\5\5\k\8\6\u\6\i\h\8\o\d\2\7\n\f\0\x\k\i\j\r\m\0\7\0\h\p\4\y\k\c\i\x\d\y\k\5\4\x\n\x\5\n\3\9\t\b\n\w\b\7\l\o\c\y\x\c\d\j\h\o\0\k\g\z\m\b\g\f\s\d\0\5\h\p\v\y\m\e\u\y\r\a\2\4\c\q\u\3\q\u\0\b\9\u\0\b\l\f\h\4\0\7\c\4\o\9\2\v\a\u\n\e\6\0\l\o\h\o\y\n\e\j\8\l\7\x\c\n\0\i\j\5\k\h\n\7\4\8\z\x\e\y\i\k\i\7\2\1\l\p\j\4\e\n\e\u\s\7\t\x\t\m\0\v\1\g\3\b\r\x\6\n\n\4\4\p\l\s\t\e\f\i\v\x\t\7\k\q\u\8\6\8\2\i\l\6\6\b\p\0\t\z\d\9\o\5\t\7\5\y\1\u\c\8\b\n\n\y\l\n\w\k\5\m\m\0\3\g\y\7\b\i\3\a\0\r\n\n\6\o\e\s\c\8\d\y\v\h\5\6\z\f\l\0\1\d\a\2\2\5\1\l\0\7\3\t\f\g\7\g\g\a\r\d\0\6\w\r\n\6\7\c\9\k\p\t\c\8\2\t\h\t\s\s\s\9\1\5\s\l\p\1\z\v\x\p\f\d\y\w\g\4\1\n\t\z\t\0\w\e\w\x\g\e\u\d\n\6\z\3\c\o\e\7\l\8\i\h\h\3\7\8\e\3\x\i\9\3\m\s\7\a\f\t\q\9\h\6\l\k\t\6\i\s\x\f\d\r\t\z\4\3\7\j\d\6\o\d\1\8\b\q\r\o\i\7\w\6\j\m\e\t\t\s\4\8\x\2\e\f\r\l\a\v\2\u\c\i\y\p\3\h\w\h\6\i\s\r\1\u\8\t\5\t\n\2\9\8\p\h\s\p\6 ]] 00:31:38.046 00:31:38.046 real 0m5.823s 00:31:38.046 user 0m4.819s 00:31:38.046 sys 0m0.665s 00:31:38.046 13:15:41 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:31:38.046 13:15:41 -- common/autotest_common.sh@10 -- # set +x 00:31:38.046 13:15:41 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:31:38.046 13:15:41 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:31:38.046 13:15:41 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:31:38.046 13:15:41 -- common/autotest_common.sh@10 -- # set +x 00:31:38.046 ************************************ 00:31:38.046 START TEST dd_flag_noatime_forced_aio 00:31:38.046 ************************************ 00:31:38.046 13:15:41 -- common/autotest_common.sh@1099 -- # noatime 00:31:38.046 13:15:41 -- dd/posix.sh@53 -- # local atime_if 00:31:38.046 13:15:41 -- dd/posix.sh@54 -- # local atime_of 00:31:38.046 13:15:41 -- dd/posix.sh@58 -- # gen_bytes 512 00:31:38.046 13:15:41 -- dd/common.sh@98 -- # xtrace_disable 00:31:38.046 13:15:41 -- common/autotest_common.sh@10 -- # set +x 00:31:38.046 13:15:41 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:31:38.046 13:15:41 -- dd/posix.sh@60 -- # atime_if=1713359740 
00:31:38.047 13:15:41 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:38.047 13:15:41 -- dd/posix.sh@61 -- # atime_of=1713359741 00:31:38.047 13:15:41 -- dd/posix.sh@66 -- # sleep 1 00:31:38.982 13:15:42 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:38.982 [2024-04-17 13:15:42.952833] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:31:38.982 [2024-04-17 13:15:42.953049] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144579 ] 00:31:38.982 [2024-04-17 13:15:43.119711] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:39.241 [2024-04-17 13:15:43.327099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:40.875  Copying: 512/512 [B] (average 500 kBps) 00:31:40.875 00:31:40.875 13:15:44 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:31:40.875 13:15:44 -- dd/posix.sh@69 -- # (( atime_if == 1713359740 )) 00:31:40.875 13:15:44 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:40.875 13:15:44 -- dd/posix.sh@70 -- # (( atime_of == 1713359741 )) 00:31:40.875 13:15:44 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:40.875 [2024-04-17 13:15:44.893873] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
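Note: the noatime case above reads the source's access time with stat --printf=%X, sleeps one second so any update would be visible at stat's one-second resolution, copies with --iflag=noatime, and asserts the atime did not move; a second copy without the flag is then expected to advance it. A sketch of that check, with illustrative paths:

  src=test/dd/dd.dump0 dst=test/dd/dd.dump1
  atime_if=$(stat --printf=%X "$src")               # atime before, in epoch seconds
  sleep 1
  spdk_dd --aio --if="$src" --iflag=noatime --of="$dst"
  (( $(stat --printf=%X "$src") == atime_if ))      # noatime: atime must not move
  spdk_dd --aio --if="$src" --of="$dst"
  (( atime_if < $(stat --printf=%X "$src") ))       # a plain read must advance it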
00:31:40.875 [2024-04-17 13:15:44.894115] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144616 ] 00:31:41.132 [2024-04-17 13:15:45.072114] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:41.390 [2024-04-17 13:15:45.312166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:42.638  Copying: 512/512 [B] (average 500 kBps) 00:31:42.638 00:31:42.638 13:15:46 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:31:42.638 13:15:46 -- dd/posix.sh@73 -- # (( atime_if < 1713359745 )) 00:31:42.638 ************************************ 00:31:42.638 END TEST dd_flag_noatime_forced_aio 00:31:42.638 ************************************ 00:31:42.638 00:31:42.638 real 0m4.892s 00:31:42.638 user 0m3.189s 00:31:42.638 sys 0m0.450s 00:31:42.638 13:15:46 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:31:42.638 13:15:46 -- common/autotest_common.sh@10 -- # set +x 00:31:42.897 13:15:46 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:31:42.897 13:15:46 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:31:42.897 13:15:46 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:31:42.897 13:15:46 -- common/autotest_common.sh@10 -- # set +x 00:31:42.897 ************************************ 00:31:42.897 START TEST dd_flags_misc_forced_aio 00:31:42.897 ************************************ 00:31:42.897 13:15:46 -- common/autotest_common.sh@1099 -- # io 00:31:42.897 13:15:46 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:31:42.897 13:15:46 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:31:42.897 13:15:46 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:31:42.897 13:15:46 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:31:42.897 13:15:46 -- dd/posix.sh@86 -- # gen_bytes 512 00:31:42.897 13:15:46 -- dd/common.sh@98 -- # xtrace_disable 00:31:42.897 13:15:46 -- common/autotest_common.sh@10 -- # set +x 00:31:42.897 13:15:46 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:42.897 13:15:46 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:31:42.897 [2024-04-17 13:15:46.918911] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
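Note: dd_flags_misc_forced_aio, which starts above, drives spdk_dd through every read-flag/write-flag pairing: direct and nonblock on the input side, those two plus sync and dsync on the output side. After each copy the 512-byte random payload is compared byte for byte; the very long [[ ... == ... ]] lines in this part of the trace are that comparison, with xtrace escaping every character of the expected value. The loop shape, reconstructed from the posix.sh trace (src/dst and the exact comparison form are illustrative; gen_bytes is the harness helper visible in the trace):

  flags_ro=(direct nonblock)
  flags_rw=("${flags_ro[@]}" sync dsync)
  for flag_ro in "${flags_ro[@]}"; do
    gen_bytes 512                                   # fresh random source payload per input flag
    for flag_rw in "${flags_rw[@]}"; do
      spdk_dd --aio --if="$src" --iflag="$flag_ro" --of="$dst" --oflag="$flag_rw"
      [[ $(< "$src") == "$(< "$dst")" ]]            # byte-for-byte compare, as in the trace
    done
  done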
00:31:42.897 [2024-04-17 13:15:46.919143] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144669 ] 00:31:43.155 [2024-04-17 13:15:47.098862] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:43.415 [2024-04-17 13:15:47.347703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:45.050  Copying: 512/512 [B] (average 500 kBps) 00:31:45.050 00:31:45.051 13:15:48 -- dd/posix.sh@93 -- # [[ vp0zuikyyrc2njv9g22r6an0qjooe4e6ttszseisy4ovgx4wsxxzu9dl2mpl2bigagcyjxlhhiu567f60fc240gxwaf567cnanfu6fzhouzd820uz2y89dwe3a48pg33ao6prhuucozbac3m4miropwcla1k1odmygkod2d2uzpgfqeas90auqpqtvrtafazlahaag93hhaqsf0j84iv9gk1b20lwzvk2jia2iebcrqiwr0ywi3i1e17wdbiydk4sryl6w5cl2kyoabnvstoezjncnlrwemozuwzs9i7jifm84y867tsd17o9pyxl8d664mjp4t2t009tp1g2s5kcgqsmd04adyogsx3mhxue1o367lqejz5ldsq4iczczzrc2intidqcbzevmjioa1wxtsi3b4m89z2050b3fvciozdfipfnrsqghm6k8izp9yvpnkrky5ujcwwt7f9sbb2nm4emnfkozxzhjgoqmqbeiry2l0ux4h99n2o15qw7z9r == \v\p\0\z\u\i\k\y\y\r\c\2\n\j\v\9\g\2\2\r\6\a\n\0\q\j\o\o\e\4\e\6\t\t\s\z\s\e\i\s\y\4\o\v\g\x\4\w\s\x\x\z\u\9\d\l\2\m\p\l\2\b\i\g\a\g\c\y\j\x\l\h\h\i\u\5\6\7\f\6\0\f\c\2\4\0\g\x\w\a\f\5\6\7\c\n\a\n\f\u\6\f\z\h\o\u\z\d\8\2\0\u\z\2\y\8\9\d\w\e\3\a\4\8\p\g\3\3\a\o\6\p\r\h\u\u\c\o\z\b\a\c\3\m\4\m\i\r\o\p\w\c\l\a\1\k\1\o\d\m\y\g\k\o\d\2\d\2\u\z\p\g\f\q\e\a\s\9\0\a\u\q\p\q\t\v\r\t\a\f\a\z\l\a\h\a\a\g\9\3\h\h\a\q\s\f\0\j\8\4\i\v\9\g\k\1\b\2\0\l\w\z\v\k\2\j\i\a\2\i\e\b\c\r\q\i\w\r\0\y\w\i\3\i\1\e\1\7\w\d\b\i\y\d\k\4\s\r\y\l\6\w\5\c\l\2\k\y\o\a\b\n\v\s\t\o\e\z\j\n\c\n\l\r\w\e\m\o\z\u\w\z\s\9\i\7\j\i\f\m\8\4\y\8\6\7\t\s\d\1\7\o\9\p\y\x\l\8\d\6\6\4\m\j\p\4\t\2\t\0\0\9\t\p\1\g\2\s\5\k\c\g\q\s\m\d\0\4\a\d\y\o\g\s\x\3\m\h\x\u\e\1\o\3\6\7\l\q\e\j\z\5\l\d\s\q\4\i\c\z\c\z\z\r\c\2\i\n\t\i\d\q\c\b\z\e\v\m\j\i\o\a\1\w\x\t\s\i\3\b\4\m\8\9\z\2\0\5\0\b\3\f\v\c\i\o\z\d\f\i\p\f\n\r\s\q\g\h\m\6\k\8\i\z\p\9\y\v\p\n\k\r\k\y\5\u\j\c\w\w\t\7\f\9\s\b\b\2\n\m\4\e\m\n\f\k\o\z\x\z\h\j\g\o\q\m\q\b\e\i\r\y\2\l\0\u\x\4\h\9\9\n\2\o\1\5\q\w\7\z\9\r ]] 00:31:45.051 13:15:48 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:45.051 13:15:48 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:31:45.051 [2024-04-17 13:15:48.880106] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:31:45.051 [2024-04-17 13:15:48.880288] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144696 ] 00:31:45.051 [2024-04-17 13:15:49.039646] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:45.309 [2024-04-17 13:15:49.258790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:46.948  Copying: 512/512 [B] (average 500 kBps) 00:31:46.948 00:31:46.948 13:15:50 -- dd/posix.sh@93 -- # [[ vp0zuikyyrc2njv9g22r6an0qjooe4e6ttszseisy4ovgx4wsxxzu9dl2mpl2bigagcyjxlhhiu567f60fc240gxwaf567cnanfu6fzhouzd820uz2y89dwe3a48pg33ao6prhuucozbac3m4miropwcla1k1odmygkod2d2uzpgfqeas90auqpqtvrtafazlahaag93hhaqsf0j84iv9gk1b20lwzvk2jia2iebcrqiwr0ywi3i1e17wdbiydk4sryl6w5cl2kyoabnvstoezjncnlrwemozuwzs9i7jifm84y867tsd17o9pyxl8d664mjp4t2t009tp1g2s5kcgqsmd04adyogsx3mhxue1o367lqejz5ldsq4iczczzrc2intidqcbzevmjioa1wxtsi3b4m89z2050b3fvciozdfipfnrsqghm6k8izp9yvpnkrky5ujcwwt7f9sbb2nm4emnfkozxzhjgoqmqbeiry2l0ux4h99n2o15qw7z9r == \v\p\0\z\u\i\k\y\y\r\c\2\n\j\v\9\g\2\2\r\6\a\n\0\q\j\o\o\e\4\e\6\t\t\s\z\s\e\i\s\y\4\o\v\g\x\4\w\s\x\x\z\u\9\d\l\2\m\p\l\2\b\i\g\a\g\c\y\j\x\l\h\h\i\u\5\6\7\f\6\0\f\c\2\4\0\g\x\w\a\f\5\6\7\c\n\a\n\f\u\6\f\z\h\o\u\z\d\8\2\0\u\z\2\y\8\9\d\w\e\3\a\4\8\p\g\3\3\a\o\6\p\r\h\u\u\c\o\z\b\a\c\3\m\4\m\i\r\o\p\w\c\l\a\1\k\1\o\d\m\y\g\k\o\d\2\d\2\u\z\p\g\f\q\e\a\s\9\0\a\u\q\p\q\t\v\r\t\a\f\a\z\l\a\h\a\a\g\9\3\h\h\a\q\s\f\0\j\8\4\i\v\9\g\k\1\b\2\0\l\w\z\v\k\2\j\i\a\2\i\e\b\c\r\q\i\w\r\0\y\w\i\3\i\1\e\1\7\w\d\b\i\y\d\k\4\s\r\y\l\6\w\5\c\l\2\k\y\o\a\b\n\v\s\t\o\e\z\j\n\c\n\l\r\w\e\m\o\z\u\w\z\s\9\i\7\j\i\f\m\8\4\y\8\6\7\t\s\d\1\7\o\9\p\y\x\l\8\d\6\6\4\m\j\p\4\t\2\t\0\0\9\t\p\1\g\2\s\5\k\c\g\q\s\m\d\0\4\a\d\y\o\g\s\x\3\m\h\x\u\e\1\o\3\6\7\l\q\e\j\z\5\l\d\s\q\4\i\c\z\c\z\z\r\c\2\i\n\t\i\d\q\c\b\z\e\v\m\j\i\o\a\1\w\x\t\s\i\3\b\4\m\8\9\z\2\0\5\0\b\3\f\v\c\i\o\z\d\f\i\p\f\n\r\s\q\g\h\m\6\k\8\i\z\p\9\y\v\p\n\k\r\k\y\5\u\j\c\w\w\t\7\f\9\s\b\b\2\n\m\4\e\m\n\f\k\o\z\x\z\h\j\g\o\q\m\q\b\e\i\r\y\2\l\0\u\x\4\h\9\9\n\2\o\1\5\q\w\7\z\9\r ]] 00:31:46.948 13:15:50 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:46.948 13:15:50 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:31:46.948 [2024-04-17 13:15:50.822540] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:31:46.948 [2024-04-17 13:15:50.823281] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144720 ] 00:31:46.948 [2024-04-17 13:15:50.990962] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:47.207 [2024-04-17 13:15:51.237283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:48.840  Copying: 512/512 [B] (average 250 kBps) 00:31:48.840 00:31:48.840 13:15:52 -- dd/posix.sh@93 -- # [[ vp0zuikyyrc2njv9g22r6an0qjooe4e6ttszseisy4ovgx4wsxxzu9dl2mpl2bigagcyjxlhhiu567f60fc240gxwaf567cnanfu6fzhouzd820uz2y89dwe3a48pg33ao6prhuucozbac3m4miropwcla1k1odmygkod2d2uzpgfqeas90auqpqtvrtafazlahaag93hhaqsf0j84iv9gk1b20lwzvk2jia2iebcrqiwr0ywi3i1e17wdbiydk4sryl6w5cl2kyoabnvstoezjncnlrwemozuwzs9i7jifm84y867tsd17o9pyxl8d664mjp4t2t009tp1g2s5kcgqsmd04adyogsx3mhxue1o367lqejz5ldsq4iczczzrc2intidqcbzevmjioa1wxtsi3b4m89z2050b3fvciozdfipfnrsqghm6k8izp9yvpnkrky5ujcwwt7f9sbb2nm4emnfkozxzhjgoqmqbeiry2l0ux4h99n2o15qw7z9r == \v\p\0\z\u\i\k\y\y\r\c\2\n\j\v\9\g\2\2\r\6\a\n\0\q\j\o\o\e\4\e\6\t\t\s\z\s\e\i\s\y\4\o\v\g\x\4\w\s\x\x\z\u\9\d\l\2\m\p\l\2\b\i\g\a\g\c\y\j\x\l\h\h\i\u\5\6\7\f\6\0\f\c\2\4\0\g\x\w\a\f\5\6\7\c\n\a\n\f\u\6\f\z\h\o\u\z\d\8\2\0\u\z\2\y\8\9\d\w\e\3\a\4\8\p\g\3\3\a\o\6\p\r\h\u\u\c\o\z\b\a\c\3\m\4\m\i\r\o\p\w\c\l\a\1\k\1\o\d\m\y\g\k\o\d\2\d\2\u\z\p\g\f\q\e\a\s\9\0\a\u\q\p\q\t\v\r\t\a\f\a\z\l\a\h\a\a\g\9\3\h\h\a\q\s\f\0\j\8\4\i\v\9\g\k\1\b\2\0\l\w\z\v\k\2\j\i\a\2\i\e\b\c\r\q\i\w\r\0\y\w\i\3\i\1\e\1\7\w\d\b\i\y\d\k\4\s\r\y\l\6\w\5\c\l\2\k\y\o\a\b\n\v\s\t\o\e\z\j\n\c\n\l\r\w\e\m\o\z\u\w\z\s\9\i\7\j\i\f\m\8\4\y\8\6\7\t\s\d\1\7\o\9\p\y\x\l\8\d\6\6\4\m\j\p\4\t\2\t\0\0\9\t\p\1\g\2\s\5\k\c\g\q\s\m\d\0\4\a\d\y\o\g\s\x\3\m\h\x\u\e\1\o\3\6\7\l\q\e\j\z\5\l\d\s\q\4\i\c\z\c\z\z\r\c\2\i\n\t\i\d\q\c\b\z\e\v\m\j\i\o\a\1\w\x\t\s\i\3\b\4\m\8\9\z\2\0\5\0\b\3\f\v\c\i\o\z\d\f\i\p\f\n\r\s\q\g\h\m\6\k\8\i\z\p\9\y\v\p\n\k\r\k\y\5\u\j\c\w\w\t\7\f\9\s\b\b\2\n\m\4\e\m\n\f\k\o\z\x\z\h\j\g\o\q\m\q\b\e\i\r\y\2\l\0\u\x\4\h\9\9\n\2\o\1\5\q\w\7\z\9\r ]] 00:31:48.840 13:15:52 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:48.840 13:15:52 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:31:48.840 [2024-04-17 13:15:52.757230] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:31:48.840 [2024-04-17 13:15:52.758033] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144748 ] 00:31:48.840 [2024-04-17 13:15:52.933030] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:49.111 [2024-04-17 13:15:53.169623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:50.775  Copying: 512/512 [B] (average 250 kBps) 00:31:50.775 00:31:50.776 13:15:54 -- dd/posix.sh@93 -- # [[ vp0zuikyyrc2njv9g22r6an0qjooe4e6ttszseisy4ovgx4wsxxzu9dl2mpl2bigagcyjxlhhiu567f60fc240gxwaf567cnanfu6fzhouzd820uz2y89dwe3a48pg33ao6prhuucozbac3m4miropwcla1k1odmygkod2d2uzpgfqeas90auqpqtvrtafazlahaag93hhaqsf0j84iv9gk1b20lwzvk2jia2iebcrqiwr0ywi3i1e17wdbiydk4sryl6w5cl2kyoabnvstoezjncnlrwemozuwzs9i7jifm84y867tsd17o9pyxl8d664mjp4t2t009tp1g2s5kcgqsmd04adyogsx3mhxue1o367lqejz5ldsq4iczczzrc2intidqcbzevmjioa1wxtsi3b4m89z2050b3fvciozdfipfnrsqghm6k8izp9yvpnkrky5ujcwwt7f9sbb2nm4emnfkozxzhjgoqmqbeiry2l0ux4h99n2o15qw7z9r == \v\p\0\z\u\i\k\y\y\r\c\2\n\j\v\9\g\2\2\r\6\a\n\0\q\j\o\o\e\4\e\6\t\t\s\z\s\e\i\s\y\4\o\v\g\x\4\w\s\x\x\z\u\9\d\l\2\m\p\l\2\b\i\g\a\g\c\y\j\x\l\h\h\i\u\5\6\7\f\6\0\f\c\2\4\0\g\x\w\a\f\5\6\7\c\n\a\n\f\u\6\f\z\h\o\u\z\d\8\2\0\u\z\2\y\8\9\d\w\e\3\a\4\8\p\g\3\3\a\o\6\p\r\h\u\u\c\o\z\b\a\c\3\m\4\m\i\r\o\p\w\c\l\a\1\k\1\o\d\m\y\g\k\o\d\2\d\2\u\z\p\g\f\q\e\a\s\9\0\a\u\q\p\q\t\v\r\t\a\f\a\z\l\a\h\a\a\g\9\3\h\h\a\q\s\f\0\j\8\4\i\v\9\g\k\1\b\2\0\l\w\z\v\k\2\j\i\a\2\i\e\b\c\r\q\i\w\r\0\y\w\i\3\i\1\e\1\7\w\d\b\i\y\d\k\4\s\r\y\l\6\w\5\c\l\2\k\y\o\a\b\n\v\s\t\o\e\z\j\n\c\n\l\r\w\e\m\o\z\u\w\z\s\9\i\7\j\i\f\m\8\4\y\8\6\7\t\s\d\1\7\o\9\p\y\x\l\8\d\6\6\4\m\j\p\4\t\2\t\0\0\9\t\p\1\g\2\s\5\k\c\g\q\s\m\d\0\4\a\d\y\o\g\s\x\3\m\h\x\u\e\1\o\3\6\7\l\q\e\j\z\5\l\d\s\q\4\i\c\z\c\z\z\r\c\2\i\n\t\i\d\q\c\b\z\e\v\m\j\i\o\a\1\w\x\t\s\i\3\b\4\m\8\9\z\2\0\5\0\b\3\f\v\c\i\o\z\d\f\i\p\f\n\r\s\q\g\h\m\6\k\8\i\z\p\9\y\v\p\n\k\r\k\y\5\u\j\c\w\w\t\7\f\9\s\b\b\2\n\m\4\e\m\n\f\k\o\z\x\z\h\j\g\o\q\m\q\b\e\i\r\y\2\l\0\u\x\4\h\9\9\n\2\o\1\5\q\w\7\z\9\r ]] 00:31:50.776 13:15:54 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:31:50.776 13:15:54 -- dd/posix.sh@86 -- # gen_bytes 512 00:31:50.776 13:15:54 -- dd/common.sh@98 -- # xtrace_disable 00:31:50.776 13:15:54 -- common/autotest_common.sh@10 -- # set +x 00:31:50.776 13:15:54 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:50.776 13:15:54 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:31:50.776 [2024-04-17 13:15:54.734809] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:31:50.776 [2024-04-17 13:15:54.735026] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144773 ] 00:31:50.776 [2024-04-17 13:15:54.898138] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:51.034 [2024-04-17 13:15:55.173403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:52.591  Copying: 512/512 [B] (average 500 kBps) 00:31:52.591 00:31:52.591 13:15:56 -- dd/posix.sh@93 -- # [[ lqxzyhz386llnaf31s5kmwfmpxvb3jxnzcq3m964va7ola0awo6jsqtnlhmdxegorokga8ob9vkpxabnis6a1l5hlfbo3wmzdrhjc4kbxelqgeqg5rcz32tsw33fx87tpu15scl9zauna824jc1yhw1rgv60fxqp4dszr4fpga61lq3ivij4yqfd3jjy8smpfelw6z2uovfsh8uzkzrt48jd9ek4pbld7obffj9gda08ik9xgaaohkxf01mts8s1o3gno06buigphrhn5wm18bwp3s4afw64pdz4euokhivglrvco2rsujeub5mc9f2snfj7v9td0cwrqglrbjpmwlgn9r5m8dfstcqr879gdcdkjiht1ii4k8v8jctul75ofhz6r3z61e0k0beqnsxs1n13ugemi0cc3pbpg4woxdcnmdw7kvgqpoc12h64l14t2bxrc33bitfic30a2sg0uclmsbqv5k9ubqotv4l1qehfzkmprca2yvnft9ypxggy == \l\q\x\z\y\h\z\3\8\6\l\l\n\a\f\3\1\s\5\k\m\w\f\m\p\x\v\b\3\j\x\n\z\c\q\3\m\9\6\4\v\a\7\o\l\a\0\a\w\o\6\j\s\q\t\n\l\h\m\d\x\e\g\o\r\o\k\g\a\8\o\b\9\v\k\p\x\a\b\n\i\s\6\a\1\l\5\h\l\f\b\o\3\w\m\z\d\r\h\j\c\4\k\b\x\e\l\q\g\e\q\g\5\r\c\z\3\2\t\s\w\3\3\f\x\8\7\t\p\u\1\5\s\c\l\9\z\a\u\n\a\8\2\4\j\c\1\y\h\w\1\r\g\v\6\0\f\x\q\p\4\d\s\z\r\4\f\p\g\a\6\1\l\q\3\i\v\i\j\4\y\q\f\d\3\j\j\y\8\s\m\p\f\e\l\w\6\z\2\u\o\v\f\s\h\8\u\z\k\z\r\t\4\8\j\d\9\e\k\4\p\b\l\d\7\o\b\f\f\j\9\g\d\a\0\8\i\k\9\x\g\a\a\o\h\k\x\f\0\1\m\t\s\8\s\1\o\3\g\n\o\0\6\b\u\i\g\p\h\r\h\n\5\w\m\1\8\b\w\p\3\s\4\a\f\w\6\4\p\d\z\4\e\u\o\k\h\i\v\g\l\r\v\c\o\2\r\s\u\j\e\u\b\5\m\c\9\f\2\s\n\f\j\7\v\9\t\d\0\c\w\r\q\g\l\r\b\j\p\m\w\l\g\n\9\r\5\m\8\d\f\s\t\c\q\r\8\7\9\g\d\c\d\k\j\i\h\t\1\i\i\4\k\8\v\8\j\c\t\u\l\7\5\o\f\h\z\6\r\3\z\6\1\e\0\k\0\b\e\q\n\s\x\s\1\n\1\3\u\g\e\m\i\0\c\c\3\p\b\p\g\4\w\o\x\d\c\n\m\d\w\7\k\v\g\q\p\o\c\1\2\h\6\4\l\1\4\t\2\b\x\r\c\3\3\b\i\t\f\i\c\3\0\a\2\s\g\0\u\c\l\m\s\b\q\v\5\k\9\u\b\q\o\t\v\4\l\1\q\e\h\f\z\k\m\p\r\c\a\2\y\v\n\f\t\9\y\p\x\g\g\y ]] 00:31:52.591 13:15:56 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:52.591 13:15:56 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:31:52.849 [2024-04-17 13:15:56.769283] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:31:52.849 [2024-04-17 13:15:56.769499] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144817 ] 00:31:52.849 [2024-04-17 13:15:56.938581] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:53.108 [2024-04-17 13:15:57.151227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:54.740  Copying: 512/512 [B] (average 500 kBps) 00:31:54.740 00:31:54.741 13:15:58 -- dd/posix.sh@93 -- # [[ lqxzyhz386llnaf31s5kmwfmpxvb3jxnzcq3m964va7ola0awo6jsqtnlhmdxegorokga8ob9vkpxabnis6a1l5hlfbo3wmzdrhjc4kbxelqgeqg5rcz32tsw33fx87tpu15scl9zauna824jc1yhw1rgv60fxqp4dszr4fpga61lq3ivij4yqfd3jjy8smpfelw6z2uovfsh8uzkzrt48jd9ek4pbld7obffj9gda08ik9xgaaohkxf01mts8s1o3gno06buigphrhn5wm18bwp3s4afw64pdz4euokhivglrvco2rsujeub5mc9f2snfj7v9td0cwrqglrbjpmwlgn9r5m8dfstcqr879gdcdkjiht1ii4k8v8jctul75ofhz6r3z61e0k0beqnsxs1n13ugemi0cc3pbpg4woxdcnmdw7kvgqpoc12h64l14t2bxrc33bitfic30a2sg0uclmsbqv5k9ubqotv4l1qehfzkmprca2yvnft9ypxggy == \l\q\x\z\y\h\z\3\8\6\l\l\n\a\f\3\1\s\5\k\m\w\f\m\p\x\v\b\3\j\x\n\z\c\q\3\m\9\6\4\v\a\7\o\l\a\0\a\w\o\6\j\s\q\t\n\l\h\m\d\x\e\g\o\r\o\k\g\a\8\o\b\9\v\k\p\x\a\b\n\i\s\6\a\1\l\5\h\l\f\b\o\3\w\m\z\d\r\h\j\c\4\k\b\x\e\l\q\g\e\q\g\5\r\c\z\3\2\t\s\w\3\3\f\x\8\7\t\p\u\1\5\s\c\l\9\z\a\u\n\a\8\2\4\j\c\1\y\h\w\1\r\g\v\6\0\f\x\q\p\4\d\s\z\r\4\f\p\g\a\6\1\l\q\3\i\v\i\j\4\y\q\f\d\3\j\j\y\8\s\m\p\f\e\l\w\6\z\2\u\o\v\f\s\h\8\u\z\k\z\r\t\4\8\j\d\9\e\k\4\p\b\l\d\7\o\b\f\f\j\9\g\d\a\0\8\i\k\9\x\g\a\a\o\h\k\x\f\0\1\m\t\s\8\s\1\o\3\g\n\o\0\6\b\u\i\g\p\h\r\h\n\5\w\m\1\8\b\w\p\3\s\4\a\f\w\6\4\p\d\z\4\e\u\o\k\h\i\v\g\l\r\v\c\o\2\r\s\u\j\e\u\b\5\m\c\9\f\2\s\n\f\j\7\v\9\t\d\0\c\w\r\q\g\l\r\b\j\p\m\w\l\g\n\9\r\5\m\8\d\f\s\t\c\q\r\8\7\9\g\d\c\d\k\j\i\h\t\1\i\i\4\k\8\v\8\j\c\t\u\l\7\5\o\f\h\z\6\r\3\z\6\1\e\0\k\0\b\e\q\n\s\x\s\1\n\1\3\u\g\e\m\i\0\c\c\3\p\b\p\g\4\w\o\x\d\c\n\m\d\w\7\k\v\g\q\p\o\c\1\2\h\6\4\l\1\4\t\2\b\x\r\c\3\3\b\i\t\f\i\c\3\0\a\2\s\g\0\u\c\l\m\s\b\q\v\5\k\9\u\b\q\o\t\v\4\l\1\q\e\h\f\z\k\m\p\r\c\a\2\y\v\n\f\t\9\y\p\x\g\g\y ]] 00:31:54.741 13:15:58 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:54.741 13:15:58 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:31:54.741 [2024-04-17 13:15:58.679676] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:31:54.741 [2024-04-17 13:15:58.679893] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144842 ] 00:31:54.741 [2024-04-17 13:15:58.848158] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:54.999 [2024-04-17 13:15:59.099059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:56.500  Copying: 512/512 [B] (average 250 kBps) 00:31:56.500 00:31:56.500 13:16:00 -- dd/posix.sh@93 -- # [[ lqxzyhz386llnaf31s5kmwfmpxvb3jxnzcq3m964va7ola0awo6jsqtnlhmdxegorokga8ob9vkpxabnis6a1l5hlfbo3wmzdrhjc4kbxelqgeqg5rcz32tsw33fx87tpu15scl9zauna824jc1yhw1rgv60fxqp4dszr4fpga61lq3ivij4yqfd3jjy8smpfelw6z2uovfsh8uzkzrt48jd9ek4pbld7obffj9gda08ik9xgaaohkxf01mts8s1o3gno06buigphrhn5wm18bwp3s4afw64pdz4euokhivglrvco2rsujeub5mc9f2snfj7v9td0cwrqglrbjpmwlgn9r5m8dfstcqr879gdcdkjiht1ii4k8v8jctul75ofhz6r3z61e0k0beqnsxs1n13ugemi0cc3pbpg4woxdcnmdw7kvgqpoc12h64l14t2bxrc33bitfic30a2sg0uclmsbqv5k9ubqotv4l1qehfzkmprca2yvnft9ypxggy == \l\q\x\z\y\h\z\3\8\6\l\l\n\a\f\3\1\s\5\k\m\w\f\m\p\x\v\b\3\j\x\n\z\c\q\3\m\9\6\4\v\a\7\o\l\a\0\a\w\o\6\j\s\q\t\n\l\h\m\d\x\e\g\o\r\o\k\g\a\8\o\b\9\v\k\p\x\a\b\n\i\s\6\a\1\l\5\h\l\f\b\o\3\w\m\z\d\r\h\j\c\4\k\b\x\e\l\q\g\e\q\g\5\r\c\z\3\2\t\s\w\3\3\f\x\8\7\t\p\u\1\5\s\c\l\9\z\a\u\n\a\8\2\4\j\c\1\y\h\w\1\r\g\v\6\0\f\x\q\p\4\d\s\z\r\4\f\p\g\a\6\1\l\q\3\i\v\i\j\4\y\q\f\d\3\j\j\y\8\s\m\p\f\e\l\w\6\z\2\u\o\v\f\s\h\8\u\z\k\z\r\t\4\8\j\d\9\e\k\4\p\b\l\d\7\o\b\f\f\j\9\g\d\a\0\8\i\k\9\x\g\a\a\o\h\k\x\f\0\1\m\t\s\8\s\1\o\3\g\n\o\0\6\b\u\i\g\p\h\r\h\n\5\w\m\1\8\b\w\p\3\s\4\a\f\w\6\4\p\d\z\4\e\u\o\k\h\i\v\g\l\r\v\c\o\2\r\s\u\j\e\u\b\5\m\c\9\f\2\s\n\f\j\7\v\9\t\d\0\c\w\r\q\g\l\r\b\j\p\m\w\l\g\n\9\r\5\m\8\d\f\s\t\c\q\r\8\7\9\g\d\c\d\k\j\i\h\t\1\i\i\4\k\8\v\8\j\c\t\u\l\7\5\o\f\h\z\6\r\3\z\6\1\e\0\k\0\b\e\q\n\s\x\s\1\n\1\3\u\g\e\m\i\0\c\c\3\p\b\p\g\4\w\o\x\d\c\n\m\d\w\7\k\v\g\q\p\o\c\1\2\h\6\4\l\1\4\t\2\b\x\r\c\3\3\b\i\t\f\i\c\3\0\a\2\s\g\0\u\c\l\m\s\b\q\v\5\k\9\u\b\q\o\t\v\4\l\1\q\e\h\f\z\k\m\p\r\c\a\2\y\v\n\f\t\9\y\p\x\g\g\y ]] 00:31:56.500 13:16:00 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:56.500 13:16:00 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:31:56.759 [2024-04-17 13:16:00.662262] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:31:56.759 [2024-04-17 13:16:00.662452] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144866 ] 00:31:56.759 [2024-04-17 13:16:00.830378] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:57.017 [2024-04-17 13:16:01.041796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:58.646  Copying: 512/512 [B] (average 500 kBps) 00:31:58.646 00:31:58.646 ************************************ 00:31:58.646 END TEST dd_flags_misc_forced_aio 00:31:58.646 ************************************ 00:31:58.646 13:16:02 -- dd/posix.sh@93 -- # [[ lqxzyhz386llnaf31s5kmwfmpxvb3jxnzcq3m964va7ola0awo6jsqtnlhmdxegorokga8ob9vkpxabnis6a1l5hlfbo3wmzdrhjc4kbxelqgeqg5rcz32tsw33fx87tpu15scl9zauna824jc1yhw1rgv60fxqp4dszr4fpga61lq3ivij4yqfd3jjy8smpfelw6z2uovfsh8uzkzrt48jd9ek4pbld7obffj9gda08ik9xgaaohkxf01mts8s1o3gno06buigphrhn5wm18bwp3s4afw64pdz4euokhivglrvco2rsujeub5mc9f2snfj7v9td0cwrqglrbjpmwlgn9r5m8dfstcqr879gdcdkjiht1ii4k8v8jctul75ofhz6r3z61e0k0beqnsxs1n13ugemi0cc3pbpg4woxdcnmdw7kvgqpoc12h64l14t2bxrc33bitfic30a2sg0uclmsbqv5k9ubqotv4l1qehfzkmprca2yvnft9ypxggy == \l\q\x\z\y\h\z\3\8\6\l\l\n\a\f\3\1\s\5\k\m\w\f\m\p\x\v\b\3\j\x\n\z\c\q\3\m\9\6\4\v\a\7\o\l\a\0\a\w\o\6\j\s\q\t\n\l\h\m\d\x\e\g\o\r\o\k\g\a\8\o\b\9\v\k\p\x\a\b\n\i\s\6\a\1\l\5\h\l\f\b\o\3\w\m\z\d\r\h\j\c\4\k\b\x\e\l\q\g\e\q\g\5\r\c\z\3\2\t\s\w\3\3\f\x\8\7\t\p\u\1\5\s\c\l\9\z\a\u\n\a\8\2\4\j\c\1\y\h\w\1\r\g\v\6\0\f\x\q\p\4\d\s\z\r\4\f\p\g\a\6\1\l\q\3\i\v\i\j\4\y\q\f\d\3\j\j\y\8\s\m\p\f\e\l\w\6\z\2\u\o\v\f\s\h\8\u\z\k\z\r\t\4\8\j\d\9\e\k\4\p\b\l\d\7\o\b\f\f\j\9\g\d\a\0\8\i\k\9\x\g\a\a\o\h\k\x\f\0\1\m\t\s\8\s\1\o\3\g\n\o\0\6\b\u\i\g\p\h\r\h\n\5\w\m\1\8\b\w\p\3\s\4\a\f\w\6\4\p\d\z\4\e\u\o\k\h\i\v\g\l\r\v\c\o\2\r\s\u\j\e\u\b\5\m\c\9\f\2\s\n\f\j\7\v\9\t\d\0\c\w\r\q\g\l\r\b\j\p\m\w\l\g\n\9\r\5\m\8\d\f\s\t\c\q\r\8\7\9\g\d\c\d\k\j\i\h\t\1\i\i\4\k\8\v\8\j\c\t\u\l\7\5\o\f\h\z\6\r\3\z\6\1\e\0\k\0\b\e\q\n\s\x\s\1\n\1\3\u\g\e\m\i\0\c\c\3\p\b\p\g\4\w\o\x\d\c\n\m\d\w\7\k\v\g\q\p\o\c\1\2\h\6\4\l\1\4\t\2\b\x\r\c\3\3\b\i\t\f\i\c\3\0\a\2\s\g\0\u\c\l\m\s\b\q\v\5\k\9\u\b\q\o\t\v\4\l\1\q\e\h\f\z\k\m\p\r\c\a\2\y\v\n\f\t\9\y\p\x\g\g\y ]] 00:31:58.646 00:31:58.646 real 0m15.658s 00:31:58.646 user 0m12.718s 00:31:58.646 sys 0m1.872s 00:31:58.646 13:16:02 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:31:58.646 13:16:02 -- common/autotest_common.sh@10 -- # set +x 00:31:58.646 13:16:02 -- dd/posix.sh@1 -- # cleanup 00:31:58.646 13:16:02 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:31:58.646 13:16:02 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:31:58.646 00:31:58.646 real 1m6.054s 00:31:58.646 user 0m52.074s 00:31:58.646 sys 0m7.863s 00:31:58.646 13:16:02 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:31:58.646 13:16:02 -- common/autotest_common.sh@10 -- # set +x 00:31:58.646 ************************************ 00:31:58.646 END TEST spdk_dd_posix 00:31:58.646 ************************************ 00:31:58.646 13:16:02 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:31:58.646 13:16:02 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:31:58.646 13:16:02 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:31:58.646 13:16:02 -- 
common/autotest_common.sh@10 -- # set +x 00:31:58.646 ************************************ 00:31:58.646 START TEST spdk_dd_malloc 00:31:58.646 ************************************ 00:31:58.646 13:16:02 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:31:58.646 * Looking for test storage... 00:31:58.646 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:31:58.646 13:16:02 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:58.646 13:16:02 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:58.646 13:16:02 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:58.646 13:16:02 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:58.646 13:16:02 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:58.646 13:16:02 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:58.646 13:16:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:58.646 13:16:02 -- paths/export.sh@5 -- # export PATH 00:31:58.646 13:16:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:58.646 13:16:02 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:31:58.646 13:16:02 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:31:58.646 13:16:02 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:31:58.646 13:16:02 -- common/autotest_common.sh@10 -- # set +x 00:31:58.646 ************************************ 00:31:58.646 START TEST dd_malloc_copy 00:31:58.646 ************************************ 00:31:58.646 13:16:02 -- 
common/autotest_common.sh@1099 -- # malloc_copy 00:31:58.646 13:16:02 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:31:58.647 13:16:02 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:31:58.647 13:16:02 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(["name"]=$mbdev0 ["num_blocks"]=$mbdev0_b ["block_size"]=$mbdev0_bs) 00:31:58.647 13:16:02 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:31:58.647 13:16:02 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(["name"]=$mbdev1 ["num_blocks"]=$mbdev1_b ["block_size"]=$mbdev1_bs) 00:31:58.647 13:16:02 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:31:58.647 13:16:02 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:31:58.647 13:16:02 -- dd/malloc.sh@28 -- # gen_conf 00:31:58.647 13:16:02 -- dd/common.sh@31 -- # xtrace_disable 00:31:58.647 13:16:02 -- common/autotest_common.sh@10 -- # set +x 00:31:58.647 { 00:31:58.647 "subsystems": [ 00:31:58.647 { 00:31:58.647 "subsystem": "bdev", 00:31:58.647 "config": [ 00:31:58.647 { 00:31:58.647 "params": { 00:31:58.647 "num_blocks": 1048576, 00:31:58.647 "block_size": 512, 00:31:58.647 "name": "malloc0" 00:31:58.647 }, 00:31:58.647 "method": "bdev_malloc_create" 00:31:58.647 }, 00:31:58.647 { 00:31:58.647 "params": { 00:31:58.647 "num_blocks": 1048576, 00:31:58.647 "block_size": 512, 00:31:58.647 "name": "malloc1" 00:31:58.647 }, 00:31:58.647 "method": "bdev_malloc_create" 00:31:58.647 }, 00:31:58.647 { 00:31:58.647 "method": "bdev_wait_for_examine" 00:31:58.647 } 00:31:58.647 ] 00:31:58.647 } 00:31:58.647 ] 00:31:58.647 } 00:31:58.647 [2024-04-17 13:16:02.792262] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:31:58.647 [2024-04-17 13:16:02.792601] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144973 ] 00:31:58.906 [2024-04-17 13:16:02.961564] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:59.164 [2024-04-17 13:16:03.214610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:07.769  Copying: 172/512 [MB] (172 MBps) Copying: 349/512 [MB] (177 MBps) Copying: 512/512 [MB] (average 175 MBps) 00:32:07.769 00:32:07.769 13:16:11 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:32:07.769 13:16:11 -- dd/malloc.sh@33 -- # gen_conf 00:32:07.769 13:16:11 -- dd/common.sh@31 -- # xtrace_disable 00:32:07.769 13:16:11 -- common/autotest_common.sh@10 -- # set +x 00:32:07.769 [2024-04-17 13:16:11.274423] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
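Note: dd_malloc_copy, traced above, needs no filesystem at all. The --json config hands spdk_dd two RAM-backed bdevs of 1048576 blocks x 512 bytes (512 MiB each), and the copy then runs bdev to bdev in each direction, so the "Copying: .../512 [MB]" rates measure the in-memory copy path. A sketch of the invocation, condensed from the config the test itself prints (the log passes the same JSON via /dev/fd/62):

  spdk_dd --ib=malloc0 --ob=malloc1 --json <(cat <<'EOF'
  { "subsystems": [ { "subsystem": "bdev", "config": [
    { "method": "bdev_malloc_create",
      "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 } },
    { "method": "bdev_malloc_create",
      "params": { "name": "malloc1", "num_blocks": 1048576, "block_size": 512 } },
    { "method": "bdev_wait_for_examine" } ] } ] }
  EOF
  )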
00:32:07.769 [2024-04-17 13:16:11.274594] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145092 ] 00:32:07.769 { 00:32:07.769 "subsystems": [ 00:32:07.769 { 00:32:07.769 "subsystem": "bdev", 00:32:07.769 "config": [ 00:32:07.769 { 00:32:07.769 "params": { 00:32:07.769 "num_blocks": 1048576, 00:32:07.769 "block_size": 512, 00:32:07.769 "name": "malloc0" 00:32:07.769 }, 00:32:07.769 "method": "bdev_malloc_create" 00:32:07.769 }, 00:32:07.769 { 00:32:07.769 "params": { 00:32:07.769 "num_blocks": 1048576, 00:32:07.769 "block_size": 512, 00:32:07.769 "name": "malloc1" 00:32:07.769 }, 00:32:07.769 "method": "bdev_malloc_create" 00:32:07.769 }, 00:32:07.769 { 00:32:07.769 "method": "bdev_wait_for_examine" 00:32:07.769 } 00:32:07.769 ] 00:32:07.769 } 00:32:07.769 ] 00:32:07.769 } 00:32:07.769 [2024-04-17 13:16:11.436730] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:07.769 [2024-04-17 13:16:11.647269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:16.035  Copying: 174/512 [MB] (174 MBps) Copying: 350/512 [MB] (176 MBps) Copying: 512/512 [MB] (average 173 MBps) 00:32:16.035 00:32:16.035 ************************************ 00:32:16.035 END TEST dd_malloc_copy 00:32:16.035 ************************************ 00:32:16.035 00:32:16.035 real 0m16.927s 00:32:16.035 user 0m15.551s 00:32:16.035 sys 0m1.208s 00:32:16.035 13:16:19 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:32:16.035 13:16:19 -- common/autotest_common.sh@10 -- # set +x 00:32:16.035 00:32:16.035 real 0m17.081s 00:32:16.035 user 0m15.632s 00:32:16.035 sys 0m1.282s 00:32:16.035 13:16:19 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:32:16.035 13:16:19 -- common/autotest_common.sh@10 -- # set +x 00:32:16.035 ************************************ 00:32:16.035 END TEST spdk_dd_malloc 00:32:16.035 ************************************ 00:32:16.035 13:16:19 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 00:32:16.035 13:16:19 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:32:16.035 13:16:19 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:32:16.035 13:16:19 -- common/autotest_common.sh@10 -- # set +x 00:32:16.035 ************************************ 00:32:16.035 START TEST spdk_dd_bdev_to_bdev 00:32:16.035 ************************************ 00:32:16.035 13:16:19 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 00:32:16.035 * Looking for test storage... 
00:32:16.035 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:32:16.035 13:16:19 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:16.035 13:16:19 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:16.035 13:16:19 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:16.035 13:16:19 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:16.035 13:16:19 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:32:16.035 13:16:19 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:32:16.035 13:16:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:32:16.035 13:16:19 -- paths/export.sh@5 -- # export PATH 00:32:16.035 13:16:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:32:16.035 13:16:19 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:32:16.035 13:16:19 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:32:16.035 13:16:19 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:32:16.035 13:16:19 -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:32:16.035 13:16:19 -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:32:16.035 13:16:19 -- dd/bdev_to_bdev.sh@67 -- # bdev0=Nvme0n1 00:32:16.035 13:16:19 -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:10.0 00:32:16.035 13:16:19 -- dd/bdev_to_bdev.sh@68 -- # aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:32:16.035 13:16:19 -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:32:16.035 13:16:19 -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(["name"]=$nvme0 ["traddr"]=$nvme0_pci ["trtype"]=pcie) 00:32:16.035 13:16:19 -- 
dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:32:16.035 13:16:19 -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(["name"]=$bdev1 ["filename"]=$aio1 ["block_size"]=4096) 00:32:16.035 13:16:19 -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:32:16.035 13:16:19 -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 00:32:16.035 [2024-04-17 13:16:19.908897] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:32:16.035 [2024-04-17 13:16:19.909109] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145281 ] 00:32:16.035 [2024-04-17 13:16:20.078566] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:16.294 [2024-04-17 13:16:20.345677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:18.278  Copying: 256/256 [MB] (average 1201 MBps) 00:32:18.278 00:32:18.278 13:16:22 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:32:18.278 13:16:22 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:32:18.278 13:16:22 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:32:18.278 13:16:22 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:32:18.278 13:16:22 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:32:18.278 13:16:22 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:32:18.278 13:16:22 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:32:18.278 13:16:22 -- common/autotest_common.sh@10 -- # set +x 00:32:18.278 ************************************ 00:32:18.278 START TEST dd_inflate_file 00:32:18.278 ************************************ 00:32:18.278 13:16:22 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:32:18.278 [2024-04-17 13:16:22.207201] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
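Note: the bdev_to_bdev suite first zero-fills a 256 MiB backing file for the aio1 bdev, then dd_inflate_file (started above) appends 64 MiB to dd.dump0 with --oflag=append. Condensed from the trace, with paths shortened:

  spdk_dd --if=/dev/zero --of=test/dd/aio1 --bs=1048576 --count=256                   # aio1 backing file
  spdk_dd --if=/dev/zero --of=test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 # inflate by 64 MiB

The 26-character magic line plus its newline (27 bytes) plus the 64 MiB appended here account exactly for the 67108891 bytes that wc -c reports a little further down.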
00:32:18.278 [2024-04-17 13:16:22.207386] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145321 ] 00:32:18.278 [2024-04-17 13:16:22.370729] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:18.537 [2024-04-17 13:16:22.634305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:20.046  Copying: 64/64 [MB] (average 1032 MBps) 00:32:20.046 00:32:20.046 ************************************ 00:32:20.046 END TEST dd_inflate_file 00:32:20.046 ************************************ 00:32:20.046 00:32:20.046 real 0m2.022s 00:32:20.046 user 0m1.599s 00:32:20.046 sys 0m0.285s 00:32:20.046 13:16:24 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:32:20.046 13:16:24 -- common/autotest_common.sh@10 -- # set +x 00:32:20.305 13:16:24 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:32:20.305 13:16:24 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:32:20.305 13:16:24 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:32:20.305 13:16:24 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:32:20.305 13:16:24 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:32:20.305 13:16:24 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:32:20.305 13:16:24 -- common/autotest_common.sh@10 -- # set +x 00:32:20.305 13:16:24 -- dd/common.sh@31 -- # xtrace_disable 00:32:20.305 13:16:24 -- common/autotest_common.sh@10 -- # set +x 00:32:20.305 ************************************ 00:32:20.305 START TEST dd_copy_to_out_bdev 00:32:20.305 ************************************ 00:32:20.305 13:16:24 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:32:20.305 { 00:32:20.305 "subsystems": [ 00:32:20.305 { 00:32:20.305 "subsystem": "bdev", 00:32:20.305 "config": [ 00:32:20.305 { 00:32:20.305 "params": { 00:32:20.305 "block_size": 4096, 00:32:20.305 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:32:20.305 "name": "aio1" 00:32:20.305 }, 00:32:20.305 "method": "bdev_aio_create" 00:32:20.305 }, 00:32:20.305 { 00:32:20.305 "params": { 00:32:20.305 "trtype": "pcie", 00:32:20.305 "traddr": "0000:00:10.0", 00:32:20.305 "name": "Nvme0" 00:32:20.305 }, 00:32:20.305 "method": "bdev_nvme_attach_controller" 00:32:20.305 }, 00:32:20.305 { 00:32:20.305 "method": "bdev_wait_for_examine" 00:32:20.305 } 00:32:20.305 ] 00:32:20.305 } 00:32:20.305 ] 00:32:20.305 } 00:32:20.305 [2024-04-17 13:16:24.304543] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
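Note: dd_copy_to_out_bdev, starting above, is the first case that mixes bdev types: the config attaches the NVMe controller at 0000:00:10.0 as Nvme0 and wraps the aio1 file in a 4096-byte-block AIO bdev, then the 64 MiB dd.dump0 (magic line first) is written into Nvme0n1. A sketch of the invocation, with the config condensed from what gen_conf prints and paths shortened:

  spdk_dd --if=test/dd/dd.dump0 --ob=Nvme0n1 --json <(cat <<'EOF'
  { "subsystems": [ { "subsystem": "bdev", "config": [
    { "method": "bdev_aio_create",
      "params": { "name": "aio1", "filename": "test/dd/aio1", "block_size": 4096 } },
    { "method": "bdev_nvme_attach_controller",
      "params": { "name": "Nvme0", "trtype": "pcie", "traddr": "0000:00:10.0" } },
    { "method": "bdev_wait_for_examine" } ] } ] }
  EOF
  )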
00:32:20.305 [2024-04-17 13:16:24.304725] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145378 ] 00:32:20.564 [2024-04-17 13:16:24.467475] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:20.824 [2024-04-17 13:16:24.720269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:23.578  Copying: 55/64 [MB] (55 MBps) Copying: 64/64 [MB] (average 55 MBps) 00:32:23.578 00:32:23.578 00:32:23.578 real 0m3.242s 00:32:23.578 user 0m2.836s 00:32:23.578 sys 0m0.301s 00:32:23.578 13:16:27 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:32:23.578 13:16:27 -- common/autotest_common.sh@10 -- # set +x 00:32:23.578 ************************************ 00:32:23.578 END TEST dd_copy_to_out_bdev 00:32:23.578 ************************************ 00:32:23.578 13:16:27 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:32:23.578 13:16:27 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:32:23.578 13:16:27 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:32:23.578 13:16:27 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:32:23.578 13:16:27 -- common/autotest_common.sh@10 -- # set +x 00:32:23.578 ************************************ 00:32:23.578 START TEST dd_offset_magic 00:32:23.578 ************************************ 00:32:23.578 13:16:27 -- common/autotest_common.sh@1099 -- # offset_magic 00:32:23.578 13:16:27 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:32:23.578 13:16:27 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:32:23.578 13:16:27 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:32:23.578 13:16:27 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:32:23.578 13:16:27 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:32:23.578 13:16:27 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:32:23.578 13:16:27 -- dd/common.sh@31 -- # xtrace_disable 00:32:23.578 13:16:27 -- common/autotest_common.sh@10 -- # set +x 00:32:23.578 { 00:32:23.578 "subsystems": [ 00:32:23.578 { 00:32:23.578 "subsystem": "bdev", 00:32:23.578 "config": [ 00:32:23.578 { 00:32:23.578 "params": { 00:32:23.578 "block_size": 4096, 00:32:23.578 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:32:23.578 "name": "aio1" 00:32:23.578 }, 00:32:23.579 "method": "bdev_aio_create" 00:32:23.579 }, 00:32:23.579 { 00:32:23.579 "params": { 00:32:23.579 "trtype": "pcie", 00:32:23.579 "traddr": "0000:00:10.0", 00:32:23.579 "name": "Nvme0" 00:32:23.579 }, 00:32:23.579 "method": "bdev_nvme_attach_controller" 00:32:23.579 }, 00:32:23.579 { 00:32:23.579 "method": "bdev_wait_for_examine" 00:32:23.579 } 00:32:23.579 ] 00:32:23.579 } 00:32:23.579 ] 00:32:23.579 } 00:32:23.579 [2024-04-17 13:16:27.639909] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:32:23.579 [2024-04-17 13:16:27.640175] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145468 ] 00:32:23.838 [2024-04-17 13:16:27.813066] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:24.096 [2024-04-17 13:16:28.025822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:26.040  Copying: 65/65 [MB] (average 255 MBps) 00:32:26.040 00:32:26.040 13:16:29 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:32:26.040 13:16:29 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:32:26.040 13:16:29 -- dd/common.sh@31 -- # xtrace_disable 00:32:26.040 13:16:29 -- common/autotest_common.sh@10 -- # set +x 00:32:26.040 { 00:32:26.040 "subsystems": [ 00:32:26.040 { 00:32:26.040 "subsystem": "bdev", 00:32:26.040 "config": [ 00:32:26.040 { 00:32:26.040 "params": { 00:32:26.040 "block_size": 4096, 00:32:26.040 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:32:26.040 "name": "aio1" 00:32:26.040 }, 00:32:26.040 "method": "bdev_aio_create" 00:32:26.040 }, 00:32:26.040 { 00:32:26.040 "params": { 00:32:26.040 "trtype": "pcie", 00:32:26.040 "traddr": "0000:00:10.0", 00:32:26.040 "name": "Nvme0" 00:32:26.040 }, 00:32:26.040 "method": "bdev_nvme_attach_controller" 00:32:26.040 }, 00:32:26.040 { 00:32:26.040 "method": "bdev_wait_for_examine" 00:32:26.040 } 00:32:26.040 ] 00:32:26.040 } 00:32:26.040 ] 00:32:26.040 } 00:32:26.040 [2024-04-17 13:16:29.916915] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:32:26.040 [2024-04-17 13:16:29.917399] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145504 ] 00:32:26.040 [2024-04-17 13:16:30.098823] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:26.298 [2024-04-17 13:16:30.315401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:27.801  Copying: 1024/1024 [kB] (average 1000 MBps) 00:32:27.801 00:32:28.059 13:16:31 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:32:28.059 13:16:31 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:32:28.059 13:16:31 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:32:28.059 13:16:31 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:32:28.059 13:16:31 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:32:28.059 13:16:31 -- dd/common.sh@31 -- # xtrace_disable 00:32:28.059 13:16:31 -- common/autotest_common.sh@10 -- # set +x 00:32:28.059 [2024-04-17 13:16:32.007195] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
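The read -rn26 magic_check step above verifies that the 26-byte string "This Is Our Magic, find it" survived the bdev-to-bdev copy at each 1 MiB-unit offset. A minimal sketch of the same round trip, assuming a plain file in place of the aio1 bdev:

    magic='This Is Our Magic, find it'
    printf '%s' "$magic" | dd of=target.bin bs=1M seek=16 conv=notrunc status=none
    read -rn26 magic_check < <(dd if=target.bin bs=1M skip=16 status=none)
    [[ $magic_check == "$magic" ]] && echo 'offset 16 ok'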
00:32:28.059 [2024-04-17 13:16:32.007906] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145537 ] 00:32:28.059 { 00:32:28.059 "subsystems": [ 00:32:28.059 { 00:32:28.059 "subsystem": "bdev", 00:32:28.059 "config": [ 00:32:28.059 { 00:32:28.059 "params": { 00:32:28.059 "block_size": 4096, 00:32:28.059 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:32:28.059 "name": "aio1" 00:32:28.059 }, 00:32:28.059 "method": "bdev_aio_create" 00:32:28.059 }, 00:32:28.059 { 00:32:28.059 "params": { 00:32:28.059 "trtype": "pcie", 00:32:28.059 "traddr": "0000:00:10.0", 00:32:28.059 "name": "Nvme0" 00:32:28.059 }, 00:32:28.059 "method": "bdev_nvme_attach_controller" 00:32:28.059 }, 00:32:28.059 { 00:32:28.059 "method": "bdev_wait_for_examine" 00:32:28.059 } 00:32:28.059 ] 00:32:28.059 } 00:32:28.059 ] 00:32:28.059 } 00:32:28.059 [2024-04-17 13:16:32.180454] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:28.318 [2024-04-17 13:16:32.392432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:30.262  Copying: 65/65 [MB] (average 345 MBps) 00:32:30.262 00:32:30.262 13:16:34 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:32:30.262 13:16:34 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:32:30.262 13:16:34 -- dd/common.sh@31 -- # xtrace_disable 00:32:30.262 13:16:34 -- common/autotest_common.sh@10 -- # set +x 00:32:30.262 [2024-04-17 13:16:34.115202] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:32:30.262 [2024-04-17 13:16:34.115394] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145567 ] 00:32:30.262 { 00:32:30.262 "subsystems": [ 00:32:30.262 { 00:32:30.262 "subsystem": "bdev", 00:32:30.262 "config": [ 00:32:30.262 { 00:32:30.262 "params": { 00:32:30.262 "block_size": 4096, 00:32:30.262 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:32:30.262 "name": "aio1" 00:32:30.262 }, 00:32:30.262 "method": "bdev_aio_create" 00:32:30.262 }, 00:32:30.262 { 00:32:30.262 "params": { 00:32:30.262 "trtype": "pcie", 00:32:30.262 "traddr": "0000:00:10.0", 00:32:30.262 "name": "Nvme0" 00:32:30.262 }, 00:32:30.262 "method": "bdev_nvme_attach_controller" 00:32:30.262 }, 00:32:30.262 { 00:32:30.262 "method": "bdev_wait_for_examine" 00:32:30.262 } 00:32:30.262 ] 00:32:30.262 } 00:32:30.262 ] 00:32:30.262 } 00:32:30.262 [2024-04-17 13:16:34.284670] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:30.520 [2024-04-17 13:16:34.495612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:32.152  Copying: 1024/1024 [kB] (average 1000 MBps) 00:32:32.152 00:32:32.152 13:16:36 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:32:32.152 13:16:36 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:32:32.152 00:32:32.152 real 0m8.517s 00:32:32.152 user 0m6.598s 00:32:32.152 sys 0m1.127s 00:32:32.152 13:16:36 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:32:32.152 ************************************ 00:32:32.152 END TEST dd_offset_magic 00:32:32.152 ************************************ 00:32:32.152 13:16:36 -- common/autotest_common.sh@10 -- # set +x 00:32:32.152 13:16:36 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:32:32.152 13:16:36 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:32:32.152 13:16:36 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:32:32.152 13:16:36 -- dd/common.sh@11 -- # local nvme_ref= 00:32:32.152 13:16:36 -- dd/common.sh@12 -- # local size=4194330 00:32:32.152 13:16:36 -- dd/common.sh@14 -- # local bs=1048576 00:32:32.152 13:16:36 -- dd/common.sh@15 -- # local count=5 00:32:32.152 13:16:36 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:32:32.152 13:16:36 -- dd/common.sh@18 -- # gen_conf 00:32:32.152 13:16:36 -- dd/common.sh@31 -- # xtrace_disable 00:32:32.152 13:16:36 -- common/autotest_common.sh@10 -- # set +x 00:32:32.152 { 00:32:32.152 "subsystems": [ 00:32:32.152 { 00:32:32.152 "subsystem": "bdev", 00:32:32.152 "config": [ 00:32:32.152 { 00:32:32.152 "params": { 00:32:32.152 "block_size": 4096, 00:32:32.152 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:32:32.152 "name": "aio1" 00:32:32.152 }, 00:32:32.152 "method": "bdev_aio_create" 00:32:32.152 }, 00:32:32.152 { 00:32:32.152 "params": { 00:32:32.152 "trtype": "pcie", 00:32:32.152 "traddr": "0000:00:10.0", 00:32:32.152 "name": "Nvme0" 00:32:32.152 }, 00:32:32.152 "method": "bdev_nvme_attach_controller" 00:32:32.152 }, 00:32:32.152 { 00:32:32.152 "method": "bdev_wait_for_examine" 00:32:32.152 } 00:32:32.152 ] 00:32:32.152 } 00:32:32.152 ] 00:32:32.152 } 00:32:32.152 [2024-04-17 13:16:36.201451] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
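clear_nvme above wipes what the test wrote: a size of 4194330 bytes is covered by count=5 units of bs=1048576, i.e. ceil(size/bs), and /dev/zero is copied over that span. The same arithmetic as a quick shell check:

    size=4194330 bs=1048576
    count=$(( (size + bs - 1) / bs ))   # ceil(4194330/1048576)
    echo "$count"                       # prints 5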
00:32:32.152 [2024-04-17 13:16:36.202430] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145632 ] 00:32:32.416 [2024-04-17 13:16:36.384868] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:32.676 [2024-04-17 13:16:36.596594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:34.307  Copying: 5120/5120 [kB] (average 1250 MBps) 00:32:34.307 00:32:34.307 13:16:38 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:32:34.307 13:16:38 -- dd/common.sh@10 -- # local bdev=aio1 00:32:34.307 13:16:38 -- dd/common.sh@11 -- # local nvme_ref= 00:32:34.307 13:16:38 -- dd/common.sh@12 -- # local size=4194330 00:32:34.307 13:16:38 -- dd/common.sh@14 -- # local bs=1048576 00:32:34.307 13:16:38 -- dd/common.sh@15 -- # local count=5 00:32:34.307 13:16:38 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:32:34.307 13:16:38 -- dd/common.sh@18 -- # gen_conf 00:32:34.307 13:16:38 -- dd/common.sh@31 -- # xtrace_disable 00:32:34.307 13:16:38 -- common/autotest_common.sh@10 -- # set +x 00:32:34.307 [2024-04-17 13:16:38.130462] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:32:34.307 [2024-04-17 13:16:38.130669] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145662 ] 00:32:34.307 { 00:32:34.307 "subsystems": [ 00:32:34.307 { 00:32:34.307 "subsystem": "bdev", 00:32:34.307 "config": [ 00:32:34.307 { 00:32:34.307 "params": { 00:32:34.307 "block_size": 4096, 00:32:34.307 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:32:34.307 "name": "aio1" 00:32:34.307 }, 00:32:34.307 "method": "bdev_aio_create" 00:32:34.307 }, 00:32:34.307 { 00:32:34.308 "params": { 00:32:34.308 "trtype": "pcie", 00:32:34.308 "traddr": "0000:00:10.0", 00:32:34.308 "name": "Nvme0" 00:32:34.308 }, 00:32:34.308 "method": "bdev_nvme_attach_controller" 00:32:34.308 }, 00:32:34.308 { 00:32:34.308 "method": "bdev_wait_for_examine" 00:32:34.308 } 00:32:34.308 ] 00:32:34.308 } 00:32:34.308 ] 00:32:34.308 } 00:32:34.308 [2024-04-17 13:16:38.300958] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:34.565 [2024-04-17 13:16:38.501217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:36.200  Copying: 5120/5120 [kB] (average 384 MBps) 00:32:36.200 00:32:36.200 13:16:40 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:32:36.200 00:32:36.200 real 0m20.436s 00:32:36.200 user 0m16.151s 00:32:36.200 sys 0m2.871s 00:32:36.200 13:16:40 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:32:36.200 13:16:40 -- common/autotest_common.sh@10 -- # set +x 00:32:36.200 ************************************ 00:32:36.200 END TEST spdk_dd_bdev_to_bdev 00:32:36.200 ************************************ 00:32:36.200 13:16:40 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:32:36.200 13:16:40 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:32:36.200 13:16:40 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:32:36.200 
13:16:40 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:32:36.200 13:16:40 -- common/autotest_common.sh@10 -- # set +x 00:32:36.200 ************************************ 00:32:36.200 START TEST spdk_dd_sparse 00:32:36.200 ************************************ 00:32:36.200 13:16:40 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:32:36.459 * Looking for test storage... 00:32:36.459 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:32:36.459 13:16:40 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:36.459 13:16:40 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:36.459 13:16:40 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:36.459 13:16:40 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:36.459 13:16:40 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:32:36.459 13:16:40 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:32:36.460 13:16:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:32:36.460 13:16:40 -- paths/export.sh@5 -- # export PATH 00:32:36.460 13:16:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:32:36.460 13:16:40 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:32:36.460 13:16:40 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:32:36.460 13:16:40 -- dd/sparse.sh@110 -- # file1=file_zero1 00:32:36.460 13:16:40 -- dd/sparse.sh@111 -- # file2=file_zero2 00:32:36.460 13:16:40 -- dd/sparse.sh@112 -- # file3=file_zero3 00:32:36.460 13:16:40 -- dd/sparse.sh@113 -- # 
lvstore=dd_lvstore 00:32:36.460 13:16:40 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:32:36.460 13:16:40 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:32:36.460 13:16:40 -- dd/sparse.sh@118 -- # prepare 00:32:36.460 13:16:40 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:32:36.460 13:16:40 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:32:36.460 1+0 records in 00:32:36.460 1+0 records out 00:32:36.460 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00930051 s, 451 MB/s 00:32:36.460 13:16:40 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:32:36.460 1+0 records in 00:32:36.460 1+0 records out 00:32:36.460 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00707332 s, 593 MB/s 00:32:36.460 13:16:40 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:32:36.460 1+0 records in 00:32:36.460 1+0 records out 00:32:36.460 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00777565 s, 539 MB/s 00:32:36.460 13:16:40 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:32:36.460 13:16:40 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:32:36.460 13:16:40 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:32:36.460 13:16:40 -- common/autotest_common.sh@10 -- # set +x 00:32:36.460 ************************************ 00:32:36.460 START TEST dd_sparse_file_to_file 00:32:36.460 ************************************ 00:32:36.460 13:16:40 -- common/autotest_common.sh@1099 -- # file_to_file 00:32:36.460 13:16:40 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:32:36.460 13:16:40 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:32:36.460 13:16:40 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:32:36.460 13:16:40 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:32:36.460 13:16:40 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(["bdev_name"]=$aio_bdev ["lvs_name"]=$lvstore) 00:32:36.460 13:16:40 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:32:36.460 13:16:40 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:32:36.460 13:16:40 -- dd/sparse.sh@41 -- # gen_conf 00:32:36.460 13:16:40 -- dd/common.sh@31 -- # xtrace_disable 00:32:36.460 13:16:40 -- common/autotest_common.sh@10 -- # set +x 00:32:36.460 { 00:32:36.460 "subsystems": [ 00:32:36.460 { 00:32:36.460 "subsystem": "bdev", 00:32:36.460 "config": [ 00:32:36.460 { 00:32:36.460 "params": { 00:32:36.460 "block_size": 4096, 00:32:36.460 "filename": "dd_sparse_aio_disk", 00:32:36.460 "name": "dd_aio" 00:32:36.460 }, 00:32:36.460 "method": "bdev_aio_create" 00:32:36.460 }, 00:32:36.460 { 00:32:36.460 "params": { 00:32:36.460 "lvs_name": "dd_lvstore", 00:32:36.460 "bdev_name": "dd_aio" 00:32:36.460 }, 00:32:36.460 "method": "bdev_lvol_create_lvstore" 00:32:36.460 }, 00:32:36.460 { 00:32:36.460 "method": "bdev_wait_for_examine" 00:32:36.460 } 00:32:36.460 ] 00:32:36.460 } 00:32:36.460 ] 00:32:36.460 } 00:32:36.460 [2024-04-17 13:16:40.499271] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
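prepare lays three 4 MiB data extents at offsets 0, 16 MiB and 32 MiB inside file_zero1, so the file's logical size (37748736 bytes, 36 MiB) is three times its allocated size (24576 blocks of 512 bytes, 12 MiB); the stat %s/%b comparison later in the test hinges on exactly that gap. A quick way to see it, as a sketch:

    stat --printf='size=%s blocks=%b\n' file_zero1
    # size=37748736 blocks=24576  ->  24576 * 512 = 12582912 bytes actually allocated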
00:32:36.460 [2024-04-17 13:16:40.500086] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145762 ] 00:32:36.719 [2024-04-17 13:16:40.677034] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:36.977 [2024-04-17 13:16:40.891388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:38.613  Copying: 12/36 [MB] (average 923 MBps) 00:32:38.613 00:32:38.613 13:16:42 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:32:38.613 13:16:42 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:32:38.613 13:16:42 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:32:38.613 13:16:42 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:32:38.613 13:16:42 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:32:38.613 13:16:42 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:32:38.613 13:16:42 -- dd/sparse.sh@52 -- # stat1_b=24576 00:32:38.613 13:16:42 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:32:38.613 13:16:42 -- dd/sparse.sh@53 -- # stat2_b=24576 00:32:38.613 13:16:42 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:32:38.613 00:32:38.613 real 0m2.199s 00:32:38.613 user 0m1.771s 00:32:38.613 sys 0m0.292s 00:32:38.613 13:16:42 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:32:38.613 ************************************ 00:32:38.613 END TEST dd_sparse_file_to_file 00:32:38.613 13:16:42 -- common/autotest_common.sh@10 -- # set +x 00:32:38.613 ************************************ 00:32:38.613 13:16:42 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:32:38.613 13:16:42 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:32:38.613 13:16:42 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:32:38.613 13:16:42 -- common/autotest_common.sh@10 -- # set +x 00:32:38.613 ************************************ 00:32:38.613 START TEST dd_sparse_file_to_bdev 00:32:38.613 ************************************ 00:32:38.613 13:16:42 -- common/autotest_common.sh@1099 -- # file_to_bdev 00:32:38.613 13:16:42 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:32:38.613 13:16:42 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:32:38.613 13:16:42 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(["lvs_name"]=$lvstore ["lvol_name"]=$lvol ["size"]=37748736 ["thin_provision"]=true) 00:32:38.613 13:16:42 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:32:38.613 13:16:42 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:32:38.613 13:16:42 -- dd/sparse.sh@73 -- # gen_conf 00:32:38.613 13:16:42 -- dd/common.sh@31 -- # xtrace_disable 00:32:38.613 13:16:42 -- common/autotest_common.sh@10 -- # set +x 00:32:38.871 { 00:32:38.871 "subsystems": [ 00:32:38.871 { 00:32:38.871 "subsystem": "bdev", 00:32:38.871 "config": [ 00:32:38.871 { 00:32:38.872 "params": { 00:32:38.872 "block_size": 4096, 00:32:38.872 "filename": "dd_sparse_aio_disk", 00:32:38.872 "name": "dd_aio" 00:32:38.872 }, 00:32:38.872 "method": "bdev_aio_create" 00:32:38.872 }, 00:32:38.872 { 00:32:38.872 "params": { 00:32:38.872 "lvs_name": "dd_lvstore", 00:32:38.872 "thin_provision": true, 00:32:38.872 "lvol_name": "dd_lvol", 00:32:38.872 "size": 37748736 00:32:38.872 }, 00:32:38.872 "method": 
"bdev_lvol_create" 00:32:38.872 }, 00:32:38.872 { 00:32:38.872 "method": "bdev_wait_for_examine" 00:32:38.872 } 00:32:38.872 ] 00:32:38.872 } 00:32:38.872 ] 00:32:38.872 } 00:32:38.872 [2024-04-17 13:16:42.778974] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:32:38.872 [2024-04-17 13:16:42.779155] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145825 ] 00:32:38.872 [2024-04-17 13:16:42.949985] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:39.130 [2024-04-17 13:16:43.164336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:39.388 [2024-04-17 13:16:43.490910] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:32:39.646  Copying: 12/36 [MB] (average 545 MBps)[2024-04-17 13:16:43.549296] app.c: 930:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:32:41.024 00:32:41.024 00:32:41.024 00:32:41.024 real 0m2.120s 00:32:41.024 user 0m1.768s 00:32:41.024 sys 0m0.250s 00:32:41.024 13:16:44 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:32:41.024 13:16:44 -- common/autotest_common.sh@10 -- # set +x 00:32:41.024 ************************************ 00:32:41.024 END TEST dd_sparse_file_to_bdev 00:32:41.024 ************************************ 00:32:41.024 13:16:44 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:32:41.024 13:16:44 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:32:41.024 13:16:44 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:32:41.024 13:16:44 -- common/autotest_common.sh@10 -- # set +x 00:32:41.024 ************************************ 00:32:41.024 START TEST dd_sparse_bdev_to_file 00:32:41.024 ************************************ 00:32:41.024 13:16:44 -- common/autotest_common.sh@1099 -- # bdev_to_file 00:32:41.024 13:16:44 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:32:41.024 13:16:44 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:32:41.024 13:16:44 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:32:41.024 13:16:44 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:32:41.024 13:16:44 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:32:41.024 13:16:44 -- dd/sparse.sh@91 -- # gen_conf 00:32:41.024 13:16:44 -- dd/common.sh@31 -- # xtrace_disable 00:32:41.024 13:16:44 -- common/autotest_common.sh@10 -- # set +x 00:32:41.024 [2024-04-17 13:16:44.967111] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:32:41.024 [2024-04-17 13:16:44.967274] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145894 ] 00:32:41.024 { 00:32:41.024 "subsystems": [ 00:32:41.024 { 00:32:41.024 "subsystem": "bdev", 00:32:41.024 "config": [ 00:32:41.024 { 00:32:41.024 "params": { 00:32:41.024 "block_size": 4096, 00:32:41.024 "filename": "dd_sparse_aio_disk", 00:32:41.024 "name": "dd_aio" 00:32:41.024 }, 00:32:41.024 "method": "bdev_aio_create" 00:32:41.024 }, 00:32:41.024 { 00:32:41.024 "method": "bdev_wait_for_examine" 00:32:41.024 } 00:32:41.024 ] 00:32:41.024 } 00:32:41.024 ] 00:32:41.024 } 00:32:41.024 [2024-04-17 13:16:45.129382] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:41.283 [2024-04-17 13:16:45.346759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:43.227  Copying: 12/36 [MB] (average 923 MBps) 00:32:43.227 00:32:43.227 13:16:47 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:32:43.227 13:16:47 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:32:43.227 13:16:47 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:32:43.227 13:16:47 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:32:43.227 13:16:47 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:32:43.227 13:16:47 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:32:43.227 13:16:47 -- dd/sparse.sh@102 -- # stat2_b=24576 00:32:43.227 13:16:47 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:32:43.227 13:16:47 -- dd/sparse.sh@103 -- # stat3_b=24576 00:32:43.227 13:16:47 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:32:43.227 00:32:43.227 real 0m2.136s 00:32:43.227 user 0m1.752s 00:32:43.227 sys 0m0.284s 00:32:43.227 13:16:47 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:32:43.227 13:16:47 -- common/autotest_common.sh@10 -- # set +x 00:32:43.227 ************************************ 00:32:43.227 END TEST dd_sparse_bdev_to_file 00:32:43.227 ************************************ 00:32:43.227 13:16:47 -- dd/sparse.sh@1 -- # cleanup 00:32:43.227 13:16:47 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:32:43.227 13:16:47 -- dd/sparse.sh@12 -- # rm file_zero1 00:32:43.227 13:16:47 -- dd/sparse.sh@13 -- # rm file_zero2 00:32:43.227 13:16:47 -- dd/sparse.sh@14 -- # rm file_zero3 00:32:43.227 00:32:43.227 real 0m6.834s 00:32:43.227 user 0m5.489s 00:32:43.227 sys 0m0.998s 00:32:43.227 13:16:47 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:32:43.227 13:16:47 -- common/autotest_common.sh@10 -- # set +x 00:32:43.227 ************************************ 00:32:43.227 END TEST spdk_dd_sparse 00:32:43.227 ************************************ 00:32:43.227 13:16:47 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:32:43.227 13:16:47 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:32:43.227 13:16:47 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:32:43.227 13:16:47 -- common/autotest_common.sh@10 -- # set +x 00:32:43.227 ************************************ 00:32:43.227 START TEST spdk_dd_negative 00:32:43.227 ************************************ 00:32:43.227 13:16:47 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:32:43.227 * Looking for test storage... 
00:32:43.227 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:32:43.227 13:16:47 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:43.227 13:16:47 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:43.227 13:16:47 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:43.227 13:16:47 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:43.228 13:16:47 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:32:43.228 13:16:47 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:32:43.228 13:16:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:32:43.228 13:16:47 -- paths/export.sh@5 -- # export PATH 00:32:43.228 13:16:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:32:43.228 13:16:47 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:32:43.228 13:16:47 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:32:43.228 13:16:47 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:32:43.228 13:16:47 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:32:43.228 13:16:47 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:32:43.228 13:16:47 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:32:43.228 13:16:47 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:32:43.228 13:16:47 -- common/autotest_common.sh@10 -- # set +x 00:32:43.228 ************************************ 00:32:43.228 
START TEST dd_invalid_arguments 00:32:43.228 ************************************ 00:32:43.228 13:16:47 -- common/autotest_common.sh@1099 -- # invalid_arguments 00:32:43.228 13:16:47 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:32:43.228 13:16:47 -- common/autotest_common.sh@638 -- # local es=0 00:32:43.228 13:16:47 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:32:43.228 13:16:47 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:43.228 13:16:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:43.228 13:16:47 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:43.228 13:16:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:43.228 13:16:47 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:43.228 13:16:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:43.228 13:16:47 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:43.228 13:16:47 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:32:43.228 13:16:47 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:32:43.487 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:32:43.487 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:32:43.487 options: 00:32:43.487 -c, --config JSON config file 00:32:43.487 --json JSON config file 00:32:43.487 --json-ignore-init-errors 00:32:43.487 don't exit on invalid config entry 00:32:43.487 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:32:43.487 -g, --single-file-segments 00:32:43.487 force creating just one hugetlbfs file 00:32:43.487 -h, --help show this usage 00:32:43.487 -i, --shm-id shared memory ID (optional) 00:32:43.487 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:32:43.487 --lcores lcore to CPU mapping list. The list is in the format: 00:32:43.487 [<,lcores[@CPUs]>...] 00:32:43.487 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:32:43.487 Within the group, '-' is used for range separator, 00:32:43.487 ',' is used for single number separator. 00:32:43.487 '( )' can be omitted for single element group, 00:32:43.487 '@' can be omitted if cpus and lcores have the same value 00:32:43.487 -n, --mem-channels channel number of memory channels used for DPDK 00:32:43.487 -p, --main-core main (primary) core for DPDK 00:32:43.487 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:32:43.487 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:32:43.487 --disable-cpumask-locks Disable CPU core lock files. 
00:32:43.487 --silence-noticelog disable notice level logging to stderr 00:32:43.487 --msg-mempool-size global message memory pool size in count (default: 262143) 00:32:43.487 -u, --no-pci disable PCI access 00:32:43.487 --wait-for-rpc wait for RPCs to initialize subsystems 00:32:43.487 --max-delay maximum reactor delay (in microseconds) 00:32:43.487 -B, --pci-blocked pci addr to block (can be used more than once) 00:32:43.487 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:32:43.487 -R, --huge-unlink unlink huge files after initialization 00:32:43.487 -v, --version print SPDK version 00:32:43.487 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:32:43.487 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:32:43.487 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:32:43.487 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:32:43.487 Tracepoints vary in size and can use more than one trace entry. 00:32:43.487 --rpcs-allowed comma-separated list of permitted RPCs 00:32:43.487 --env-context Opaque context for use of the env implementation 00:32:43.487 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:32:43.487 --no-huge run without using hugepages 00:32:43.487 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid5f, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:32:43.487 -e, --tpoint-group <group_name>[:<tpoint_mask>] 00:32:43.487 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all) 00:32:43.487 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:32:43.487 Groups and masks can be combined (e.g. thread,bdev:0x1). [2024-04-17 13:16:47.396906] spdk_dd.c:1479:main: *ERROR*: Invalid arguments 00:32:43.487 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:32:43.487 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:32:43.487 [--------- DD Options ---------] 00:32:43.487 --if Input file. Must specify either --if or --ib. 00:32:43.487 --ib Input bdev. Must specify either --if or --ib 00:32:43.487 --of Output file. Must specify either --of or --ob. 00:32:43.487 --ob Output bdev. Must specify either --of or --ob. 00:32:43.487 --iflag Input file flags. 00:32:43.487 --oflag Output file flags. 00:32:43.487 --bs I/O unit size (default: 4096) 00:32:43.487 --qd Queue depth (default: 2) 00:32:43.487 --count I/O unit count. The number of I/O units to copy. (default: all) 00:32:43.487 --skip Skip this many I/O units at start of input. 
(default: 0) 00:32:43.487 --seek Skip this many I/O units at start of output. (default: 0) 00:32:43.487 --aio Force usage of AIO. (by default io_uring is used if available) 00:32:43.487 --sparse Enable hole skipping in input target 00:32:43.487 Available iflag and oflag values: 00:32:43.488 append - append mode 00:32:43.488 direct - use direct I/O for data 00:32:43.488 directory - fail unless a directory 00:32:43.488 dsync - use synchronized I/O for data 00:32:43.488 noatime - do not update access time 00:32:43.488 noctty - do not assign controlling terminal from file 00:32:43.488 nofollow - do not follow symlinks 00:32:43.488 nonblock - use non-blocking I/O 00:32:43.488 sync - use synchronized I/O for data and metadata 00:32:43.488 13:16:47 -- common/autotest_common.sh@641 -- # es=2 00:32:43.488 13:16:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:32:43.488 13:16:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:32:43.488 13:16:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:32:43.488 00:32:43.488 real 0m0.115s 00:32:43.488 user 0m0.049s 00:32:43.488 sys 0m0.066s 00:32:43.488 13:16:47 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:32:43.488 13:16:47 -- common/autotest_common.sh@10 -- # set +x 00:32:43.488 ************************************ 00:32:43.488 END TEST dd_invalid_arguments 00:32:43.488 ************************************ 00:32:43.488 13:16:47 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:32:43.488 13:16:47 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:32:43.488 13:16:47 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:32:43.488 13:16:47 -- common/autotest_common.sh@10 -- # set +x 00:32:43.488 ************************************ 00:32:43.488 START TEST dd_double_input 00:32:43.488 ************************************ 00:32:43.488 13:16:47 -- common/autotest_common.sh@1099 -- # double_input 00:32:43.488 13:16:47 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:32:43.488 13:16:47 -- common/autotest_common.sh@638 -- # local es=0 00:32:43.488 13:16:47 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:32:43.488 13:16:47 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:43.488 13:16:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:43.488 13:16:47 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:43.488 13:16:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:43.488 13:16:47 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:43.488 13:16:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:43.488 13:16:47 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:43.488 13:16:47 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:32:43.488 13:16:47 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:32:43.488 [2024-04-17 13:16:47.573357] spdk_dd.c:1486:main: *ERROR*: You may specify either --if or --ib, but not both. 
00:32:43.488 13:16:47 -- common/autotest_common.sh@641 -- # es=22 00:32:43.488 13:16:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:32:43.488 ************************************ 00:32:43.488 END TEST dd_double_input 00:32:43.488 ************************************ 00:32:43.488 13:16:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:32:43.488 13:16:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:32:43.488 00:32:43.488 real 0m0.101s 00:32:43.488 user 0m0.061s 00:32:43.488 sys 0m0.041s 00:32:43.488 13:16:47 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:32:43.488 13:16:47 -- common/autotest_common.sh@10 -- # set +x 00:32:43.747 13:16:47 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:32:43.747 13:16:47 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:32:43.747 13:16:47 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:32:43.747 13:16:47 -- common/autotest_common.sh@10 -- # set +x 00:32:43.747 ************************************ 00:32:43.747 START TEST dd_double_output 00:32:43.747 ************************************ 00:32:43.747 13:16:47 -- common/autotest_common.sh@1099 -- # double_output 00:32:43.747 13:16:47 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:32:43.747 13:16:47 -- common/autotest_common.sh@638 -- # local es=0 00:32:43.747 13:16:47 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:32:43.747 13:16:47 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:43.747 13:16:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:43.747 13:16:47 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:43.747 13:16:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:43.747 13:16:47 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:43.747 13:16:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:43.747 13:16:47 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:43.747 13:16:47 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:32:43.747 13:16:47 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:32:43.747 [2024-04-17 13:16:47.745049] spdk_dd.c:1492:main: *ERROR*: You may specify either --of or --ob, but not both. 
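Every negative case here runs spdk_dd under the harness NOT wrapper: the command must exit nonzero, and the (( es > 128 )) guard separates a clean error code such as 22 (EINVAL) from death by signal. A minimal sketch of that pattern, assuming no harness:

    NOT() {                          # succeed only if the wrapped command fails cleanly
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return 1   # killed by a signal, not a rejected argument
        (( es != 0 ))                # require a nonzero exit status
    }
    NOT spdk_dd --if=in.bin --ib=Nvme0n1 --ob=out   # --if plus --ib must be rejected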
00:32:43.747 13:16:47 -- common/autotest_common.sh@641 -- # es=22 00:32:43.747 13:16:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:32:43.747 ************************************ 00:32:43.747 END TEST dd_double_output 00:32:43.747 ************************************ 00:32:43.747 13:16:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:32:43.747 13:16:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:32:43.747 00:32:43.747 real 0m0.096s 00:32:43.747 user 0m0.051s 00:32:43.747 sys 0m0.046s 00:32:43.747 13:16:47 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:32:43.747 13:16:47 -- common/autotest_common.sh@10 -- # set +x 00:32:43.747 13:16:47 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:32:43.747 13:16:47 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:32:43.747 13:16:47 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:32:43.747 13:16:47 -- common/autotest_common.sh@10 -- # set +x 00:32:43.747 ************************************ 00:32:43.747 START TEST dd_no_input 00:32:43.747 ************************************ 00:32:43.747 13:16:47 -- common/autotest_common.sh@1099 -- # no_input 00:32:43.747 13:16:47 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:32:43.747 13:16:47 -- common/autotest_common.sh@638 -- # local es=0 00:32:43.747 13:16:47 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:32:43.748 13:16:47 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:43.748 13:16:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:43.748 13:16:47 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:43.748 13:16:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:43.748 13:16:47 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:43.748 13:16:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:43.748 13:16:47 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:43.748 13:16:47 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:32:43.748 13:16:47 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:32:44.007 [2024-04-17 13:16:47.932572] spdk_dd.c:1498:main: *ERROR*: You must specify either --if or --ib 00:32:44.007 13:16:47 -- common/autotest_common.sh@641 -- # es=22 00:32:44.007 13:16:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:32:44.007 13:16:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:32:44.007 13:16:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:32:44.007 ************************************ 00:32:44.007 END TEST dd_no_input 00:32:44.007 ************************************ 00:32:44.007 00:32:44.007 real 0m0.110s 00:32:44.007 user 0m0.036s 00:32:44.007 sys 0m0.074s 00:32:44.007 13:16:47 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:32:44.007 13:16:47 -- common/autotest_common.sh@10 -- # set +x 00:32:44.007 13:16:48 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:32:44.007 13:16:48 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:32:44.007 13:16:48 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:32:44.007 13:16:48 -- common/autotest_common.sh@10 -- # set +x 00:32:44.007 ************************************ 
00:32:44.007 START TEST dd_no_output 00:32:44.007 ************************************ 00:32:44.007 13:16:48 -- common/autotest_common.sh@1099 -- # no_output 00:32:44.007 13:16:48 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:32:44.007 13:16:48 -- common/autotest_common.sh@638 -- # local es=0 00:32:44.007 13:16:48 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:32:44.007 13:16:48 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:44.007 13:16:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:44.007 13:16:48 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:44.007 13:16:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:44.007 13:16:48 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:44.007 13:16:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:44.007 13:16:48 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:44.007 13:16:48 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:32:44.007 13:16:48 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:32:44.007 [2024-04-17 13:16:48.112435] spdk_dd.c:1504:main: *ERROR*: You must specify either --of or --ob 00:32:44.267 13:16:48 -- common/autotest_common.sh@641 -- # es=22 00:32:44.267 13:16:48 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:32:44.267 13:16:48 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:32:44.267 ************************************ 00:32:44.267 END TEST dd_no_output 00:32:44.267 ************************************ 00:32:44.267 13:16:48 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:32:44.267 00:32:44.267 real 0m0.114s 00:32:44.267 user 0m0.072s 00:32:44.267 sys 0m0.042s 00:32:44.267 13:16:48 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:32:44.267 13:16:48 -- common/autotest_common.sh@10 -- # set +x 00:32:44.267 13:16:48 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:32:44.267 13:16:48 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:32:44.267 13:16:48 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:32:44.267 13:16:48 -- common/autotest_common.sh@10 -- # set +x 00:32:44.267 ************************************ 00:32:44.267 START TEST dd_wrong_blocksize 00:32:44.267 ************************************ 00:32:44.267 13:16:48 -- common/autotest_common.sh@1099 -- # wrong_blocksize 00:32:44.267 13:16:48 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:32:44.267 13:16:48 -- common/autotest_common.sh@638 -- # local es=0 00:32:44.267 13:16:48 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:32:44.267 13:16:48 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:44.267 13:16:48 -- common/autotest_common.sh@630 -- # case 
"$(type -t "$arg")" in 00:32:44.267 13:16:48 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:44.267 13:16:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:44.267 13:16:48 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:44.267 13:16:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:44.267 13:16:48 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:44.267 13:16:48 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:32:44.267 13:16:48 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:32:44.267 [2024-04-17 13:16:48.319222] spdk_dd.c:1510:main: *ERROR*: Invalid --bs value 00:32:44.267 13:16:48 -- common/autotest_common.sh@641 -- # es=22 00:32:44.267 13:16:48 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:32:44.267 13:16:48 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:32:44.267 ************************************ 00:32:44.267 END TEST dd_wrong_blocksize 00:32:44.267 ************************************ 00:32:44.267 13:16:48 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:32:44.267 00:32:44.267 real 0m0.123s 00:32:44.267 user 0m0.070s 00:32:44.267 sys 0m0.053s 00:32:44.267 13:16:48 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:32:44.267 13:16:48 -- common/autotest_common.sh@10 -- # set +x 00:32:44.267 13:16:48 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:32:44.267 13:16:48 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:32:44.267 13:16:48 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:32:44.267 13:16:48 -- common/autotest_common.sh@10 -- # set +x 00:32:44.527 ************************************ 00:32:44.527 START TEST dd_smaller_blocksize 00:32:44.527 ************************************ 00:32:44.527 13:16:48 -- common/autotest_common.sh@1099 -- # smaller_blocksize 00:32:44.527 13:16:48 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:32:44.527 13:16:48 -- common/autotest_common.sh@638 -- # local es=0 00:32:44.527 13:16:48 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:32:44.527 13:16:48 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:44.527 13:16:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:44.527 13:16:48 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:44.527 13:16:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:44.527 13:16:48 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:44.527 13:16:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:44.527 13:16:48 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:44.527 13:16:48 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:32:44.527 13:16:48 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:32:44.527 [2024-04-17 13:16:48.527421] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:32:44.527 [2024-04-17 13:16:48.527702] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146217 ] 00:32:44.786 [2024-04-17 13:16:48.707298] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:45.045 [2024-04-17 13:16:48.973700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:45.613 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:32:45.613 [2024-04-17 13:16:49.654633] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:32:45.613 [2024-04-17 13:16:49.654786] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:46.568 [2024-04-17 13:16:50.398161] spdk_dd.c:1535:main: *ERROR*: Error occurred while performing copy 00:32:46.829 13:16:50 -- common/autotest_common.sh@641 -- # es=244 00:32:46.829 13:16:50 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:32:46.829 13:16:50 -- common/autotest_common.sh@650 -- # es=116 00:32:46.829 13:16:50 -- common/autotest_common.sh@651 -- # case "$es" in 00:32:46.829 13:16:50 -- common/autotest_common.sh@658 -- # es=1 00:32:46.829 13:16:50 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:32:46.829 00:32:46.829 real 0m2.346s 00:32:46.829 user 0m1.697s 00:32:46.829 sys 0m0.549s 00:32:46.829 ************************************ 00:32:46.829 END TEST dd_smaller_blocksize 00:32:46.829 ************************************ 00:32:46.829 13:16:50 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:32:46.829 13:16:50 -- common/autotest_common.sh@10 -- # set +x 00:32:46.829 13:16:50 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:32:46.829 13:16:50 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:32:46.829 13:16:50 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:32:46.829 13:16:50 -- common/autotest_common.sh@10 -- # set +x 00:32:46.829 ************************************ 00:32:46.829 START TEST dd_invalid_count 00:32:46.829 ************************************ 00:32:46.829 13:16:50 -- common/autotest_common.sh@1099 -- # invalid_count 00:32:46.829 13:16:50 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:32:46.829 13:16:50 -- common/autotest_common.sh@638 -- # local es=0 00:32:46.829 13:16:50 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:32:46.829 13:16:50 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:46.829 13:16:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:46.829 13:16:50 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:46.829 13:16:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:46.829 13:16:50 
-- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:46.829 13:16:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:46.829 13:16:50 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:46.829 13:16:50 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:32:46.829 13:16:50 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:32:46.829 [2024-04-17 13:16:50.940142] spdk_dd.c:1516:main: *ERROR*: Invalid --count value 00:32:47.090 13:16:50 -- common/autotest_common.sh@641 -- # es=22 00:32:47.090 13:16:50 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:32:47.090 13:16:50 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:32:47.090 ************************************ 00:32:47.090 13:16:50 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:32:47.090 00:32:47.090 real 0m0.113s 00:32:47.090 user 0m0.071s 00:32:47.090 sys 0m0.042s 00:32:47.090 13:16:50 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:32:47.090 13:16:50 -- common/autotest_common.sh@10 -- # set +x 00:32:47.090 END TEST dd_invalid_count 00:32:47.090 ************************************ 00:32:47.090 13:16:51 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:32:47.090 13:16:51 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:32:47.090 13:16:51 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:32:47.090 13:16:51 -- common/autotest_common.sh@10 -- # set +x 00:32:47.090 ************************************ 00:32:47.090 START TEST dd_invalid_oflag 00:32:47.090 ************************************ 00:32:47.090 13:16:51 -- common/autotest_common.sh@1099 -- # invalid_oflag 00:32:47.090 13:16:51 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:32:47.090 13:16:51 -- common/autotest_common.sh@638 -- # local es=0 00:32:47.090 13:16:51 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:32:47.090 13:16:51 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:47.090 13:16:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:47.091 13:16:51 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:47.091 13:16:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:47.091 13:16:51 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:47.091 13:16:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:47.091 13:16:51 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:47.091 13:16:51 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:32:47.091 13:16:51 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:32:47.091 [2024-04-17 13:16:51.118732] spdk_dd.c:1522:main: *ERROR*: --oflags may be used only with --of 00:32:47.091 ************************************ 00:32:47.091 END TEST dd_invalid_oflag 00:32:47.091 ************************************ 00:32:47.091 13:16:51 -- common/autotest_common.sh@641 -- # es=22 00:32:47.091 13:16:51 -- 
common/autotest_common.sh@649 -- # (( es > 128 )) 00:32:47.091 13:16:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:32:47.091 13:16:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:32:47.091 00:32:47.091 real 0m0.106s 00:32:47.091 user 0m0.083s 00:32:47.091 sys 0m0.024s 00:32:47.091 13:16:51 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:32:47.091 13:16:51 -- common/autotest_common.sh@10 -- # set +x 00:32:47.091 13:16:51 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:32:47.091 13:16:51 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:32:47.091 13:16:51 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:32:47.091 13:16:51 -- common/autotest_common.sh@10 -- # set +x 00:32:47.350 ************************************ 00:32:47.350 START TEST dd_invalid_iflag 00:32:47.350 ************************************ 00:32:47.350 13:16:51 -- common/autotest_common.sh@1099 -- # invalid_iflag 00:32:47.350 13:16:51 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:32:47.350 13:16:51 -- common/autotest_common.sh@638 -- # local es=0 00:32:47.350 13:16:51 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:32:47.350 13:16:51 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:47.350 13:16:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:47.350 13:16:51 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:47.350 13:16:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:47.350 13:16:51 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:47.350 13:16:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:47.350 13:16:51 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:47.350 13:16:51 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:32:47.350 13:16:51 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:32:47.350 [2024-04-17 13:16:51.313609] spdk_dd.c:1528:main: *ERROR*: --iflags may be used only with --if 00:32:47.350 13:16:51 -- common/autotest_common.sh@641 -- # es=22 00:32:47.350 13:16:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:32:47.350 13:16:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:32:47.350 13:16:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:32:47.350 00:32:47.350 real 0m0.117s 00:32:47.350 user 0m0.058s 00:32:47.350 sys 0m0.057s 00:32:47.350 13:16:51 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:32:47.350 13:16:51 -- common/autotest_common.sh@10 -- # set +x 00:32:47.350 ************************************ 00:32:47.350 END TEST dd_invalid_iflag 00:32:47.350 ************************************ 00:32:47.350 13:16:51 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:32:47.350 13:16:51 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:32:47.350 13:16:51 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:32:47.350 13:16:51 -- common/autotest_common.sh@10 -- # set +x 00:32:47.350 ************************************ 00:32:47.350 START TEST dd_unknown_flag 00:32:47.350 ************************************ 00:32:47.350 13:16:51 -- common/autotest_common.sh@1099 -- # 
unknown_flag 00:32:47.350 13:16:51 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:32:47.350 13:16:51 -- common/autotest_common.sh@638 -- # local es=0 00:32:47.350 13:16:51 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:32:47.350 13:16:51 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:47.350 13:16:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:47.350 13:16:51 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:47.350 13:16:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:47.350 13:16:51 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:47.350 13:16:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:47.350 13:16:51 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:47.350 13:16:51 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:32:47.350 13:16:51 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:32:47.350 [2024-04-17 13:16:51.495140] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:32:47.350 [2024-04-17 13:16:51.495484] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146363 ] 00:32:47.609 [2024-04-17 13:16:51.663361] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:47.917 [2024-04-17 13:16:51.907655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:48.175 [2024-04-17 13:16:52.216799] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:32:48.175 [2024-04-17 13:16:52.217069] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:48.175  Copying: 0/0 [B] (average 0 Bps)[2024-04-17 13:16:52.217339] app.c: 946:app_stop: *NOTICE*: spdk_app_stop called twice 00:32:49.112 [2024-04-17 13:16:52.954868] spdk_dd.c:1535:main: *ERROR*: Error occurred while performing copy 00:32:49.371 00:32:49.371 00:32:49.371 ************************************ 00:32:49.371 END TEST dd_unknown_flag 00:32:49.371 ************************************ 00:32:49.371 13:16:53 -- common/autotest_common.sh@641 -- # es=234 00:32:49.371 13:16:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:32:49.371 13:16:53 -- common/autotest_common.sh@650 -- # es=106 00:32:49.371 13:16:53 -- common/autotest_common.sh@651 -- # case "$es" in 00:32:49.371 13:16:53 -- common/autotest_common.sh@658 -- # es=1 00:32:49.371 13:16:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:32:49.371 00:32:49.371 real 0m1.968s 00:32:49.371 user 0m1.606s 00:32:49.371 sys 0m0.230s 00:32:49.371 13:16:53 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:32:49.371 13:16:53 -- common/autotest_common.sh@10 -- # set +x 00:32:49.371 13:16:53 -- dd/negative_dd.sh@118 -- # run_test 
dd_invalid_json invalid_json 00:32:49.371 13:16:53 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:32:49.371 13:16:53 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:32:49.371 13:16:53 -- common/autotest_common.sh@10 -- # set +x 00:32:49.371 ************************************ 00:32:49.371 START TEST dd_invalid_json 00:32:49.371 ************************************ 00:32:49.371 13:16:53 -- common/autotest_common.sh@1099 -- # invalid_json 00:32:49.371 13:16:53 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:32:49.371 13:16:53 -- dd/negative_dd.sh@95 -- # : 00:32:49.371 13:16:53 -- common/autotest_common.sh@638 -- # local es=0 00:32:49.371 13:16:53 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:32:49.371 13:16:53 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:49.371 13:16:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:49.371 13:16:53 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:49.371 13:16:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:49.372 13:16:53 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:49.372 13:16:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:49.372 13:16:53 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:49.372 13:16:53 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:32:49.372 13:16:53 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:32:49.631 [2024-04-17 13:16:53.552986] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
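Annotation: dd_invalid_json hands spdk_dd its configuration through a process substitution; the bare ':' in the trace is the no-op command whose empty output backs /dev/fd/62. A hedged reconstruction of that call, reusing the NOT sketch above:

    # <(:) yields an empty stream on a /dev/fd path, which spdk_dd
    # must reject rather than treat as a valid JSON config.
    NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
        --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
        --json <(:)
    # expected error: "parse_json: *ERROR*: JSON data cannot be empty"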
00:32:49.631 [2024-04-17 13:16:53.553338] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146421 ] 00:32:49.631 [2024-04-17 13:16:53.722101] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:49.890 [2024-04-17 13:16:53.932406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:49.890 [2024-04-17 13:16:53.932750] json_config.c: 509:parse_json: *ERROR*: JSON data cannot be empty 00:32:49.890 [2024-04-17 13:16:53.932898] rpc.c: 193:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:32:49.890 [2024-04-17 13:16:53.933025] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:49.890 [2024-04-17 13:16:53.933149] spdk_dd.c:1535:main: *ERROR*: Error occurred while performing copy 00:32:50.457 13:16:54 -- common/autotest_common.sh@641 -- # es=234 00:32:50.457 13:16:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:32:50.457 13:16:54 -- common/autotest_common.sh@650 -- # es=106 00:32:50.457 13:16:54 -- common/autotest_common.sh@651 -- # case "$es" in 00:32:50.457 13:16:54 -- common/autotest_common.sh@658 -- # es=1 00:32:50.457 13:16:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:32:50.457 00:32:50.457 real 0m0.864s 00:32:50.457 user 0m0.636s 00:32:50.457 sys 0m0.125s 00:32:50.457 13:16:54 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:32:50.457 13:16:54 -- common/autotest_common.sh@10 -- # set +x 00:32:50.457 ************************************ 00:32:50.457 END TEST dd_invalid_json 00:32:50.457 ************************************ 00:32:50.457 ************************************ 00:32:50.457 END TEST spdk_dd_negative 00:32:50.457 ************************************ 00:32:50.457 00:32:50.457 real 0m7.187s 00:32:50.457 user 0m5.015s 00:32:50.458 sys 0m1.803s 00:32:50.458 13:16:54 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:32:50.458 13:16:54 -- common/autotest_common.sh@10 -- # set +x 00:32:50.458 ************************************ 00:32:50.458 END TEST spdk_dd 00:32:50.458 ************************************ 00:32:50.458 00:32:50.458 real 2m48.464s 00:32:50.458 user 2m16.088s 00:32:50.458 sys 0m22.361s 00:32:50.458 13:16:54 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:32:50.458 13:16:54 -- common/autotest_common.sh@10 -- # set +x 00:32:50.458 13:16:54 -- spdk/autotest.sh@206 -- # '[' 1 -eq 1 ']' 00:32:50.458 13:16:54 -- spdk/autotest.sh@207 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:32:50.458 13:16:54 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:32:50.458 13:16:54 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:32:50.458 13:16:54 -- common/autotest_common.sh@10 -- # set +x 00:32:50.458 ************************************ 00:32:50.458 START TEST blockdev_nvme 00:32:50.458 ************************************ 00:32:50.458 13:16:54 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:32:50.458 * Looking for test storage... 
00:32:50.458 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:32:50.458 13:16:54 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:32:50.458 13:16:54 -- bdev/nbd_common.sh@6 -- # set -e 00:32:50.458 13:16:54 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:32:50.458 13:16:54 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:32:50.458 13:16:54 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:32:50.458 13:16:54 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:32:50.458 13:16:54 -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:32:50.458 13:16:54 -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:32:50.458 13:16:54 -- bdev/blockdev.sh@20 -- # : 00:32:50.458 13:16:54 -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:32:50.458 13:16:54 -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:32:50.458 13:16:54 -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:32:50.458 13:16:54 -- bdev/blockdev.sh@674 -- # uname -s 00:32:50.458 13:16:54 -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:32:50.458 13:16:54 -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:32:50.458 13:16:54 -- bdev/blockdev.sh@682 -- # test_type=nvme 00:32:50.458 13:16:54 -- bdev/blockdev.sh@683 -- # crypto_device= 00:32:50.458 13:16:54 -- bdev/blockdev.sh@684 -- # dek= 00:32:50.458 13:16:54 -- bdev/blockdev.sh@685 -- # env_ctx= 00:32:50.458 13:16:54 -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:32:50.458 13:16:54 -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:32:50.458 13:16:54 -- bdev/blockdev.sh@690 -- # [[ nvme == bdev ]] 00:32:50.458 13:16:54 -- bdev/blockdev.sh@690 -- # [[ nvme == crypto_* ]] 00:32:50.458 13:16:54 -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:32:50.458 13:16:54 -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=146516 00:32:50.458 13:16:54 -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:32:50.458 13:16:54 -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:32:50.458 13:16:54 -- bdev/blockdev.sh@49 -- # waitforlisten 146516 00:32:50.458 13:16:54 -- common/autotest_common.sh@817 -- # '[' -z 146516 ']' 00:32:50.458 13:16:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:50.458 13:16:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:50.458 13:16:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:50.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:50.458 13:16:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:50.458 13:16:54 -- common/autotest_common.sh@10 -- # set +x 00:32:50.717 [2024-04-17 13:16:54.665033] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
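Annotation: blockdev.sh brings up spdk_tgt before issuing any bdev RPCs; the trace records the pid for the cleanup trap and then waitforlisten blocks on the RPC socket. A sketch of that sequence, with the polling loop body assumed (names, paths, and the trap line match the trace):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' &
    spdk_tgt_pid=$!
    trap 'kill "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
    # poll until the UNIX-domain socket accepts RPC connections
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done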
00:32:50.717 [2024-04-17 13:16:54.665393] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146516 ] 00:32:50.717 [2024-04-17 13:16:54.833817] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:50.976 [2024-04-17 13:16:55.053238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:51.912 13:16:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:51.912 13:16:55 -- common/autotest_common.sh@850 -- # return 0 00:32:51.912 13:16:55 -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:32:51.912 13:16:55 -- bdev/blockdev.sh@699 -- # setup_nvme_conf 00:32:51.912 13:16:55 -- bdev/blockdev.sh@81 -- # local json 00:32:51.912 13:16:55 -- bdev/blockdev.sh@82 -- # mapfile -t json 00:32:51.912 13:16:55 -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:32:51.912 13:16:55 -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\''' 00:32:51.912 13:16:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:51.912 13:16:55 -- common/autotest_common.sh@10 -- # set +x 00:32:51.912 13:16:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:51.912 13:16:55 -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:32:51.912 13:16:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:51.912 13:16:55 -- common/autotest_common.sh@10 -- # set +x 00:32:51.912 13:16:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:51.912 13:16:55 -- bdev/blockdev.sh@740 -- # cat 00:32:51.912 13:16:55 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:32:51.912 13:16:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:51.912 13:16:55 -- common/autotest_common.sh@10 -- # set +x 00:32:51.912 13:16:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:51.912 13:16:55 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:32:51.912 13:16:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:51.912 13:16:55 -- common/autotest_common.sh@10 -- # set +x 00:32:51.912 13:16:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:51.912 13:16:55 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:32:51.912 13:16:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:51.912 13:16:55 -- common/autotest_common.sh@10 -- # set +x 00:32:51.912 13:16:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:51.912 13:16:56 -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:32:51.912 13:16:56 -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:32:51.912 13:16:56 -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:32:51.912 13:16:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:51.912 13:16:56 -- common/autotest_common.sh@10 -- # set +x 00:32:51.912 13:16:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:52.171 13:16:56 -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:32:52.171 13:16:56 -- bdev/blockdev.sh@749 -- # jq -r .name 00:32:52.171 13:16:56 -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "bb91742e-747c-4641-bdad-e484dfb225a5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' 
"uuid": "bb91742e-747c-4641-bdad-e484dfb225a5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:32:52.171 13:16:56 -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:32:52.171 13:16:56 -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1 00:32:52.171 13:16:56 -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:32:52.171 13:16:56 -- bdev/blockdev.sh@754 -- # killprocess 146516 00:32:52.171 13:16:56 -- common/autotest_common.sh@924 -- # '[' -z 146516 ']' 00:32:52.171 13:16:56 -- common/autotest_common.sh@928 -- # kill -0 146516 00:32:52.171 13:16:56 -- common/autotest_common.sh@929 -- # uname 00:32:52.171 13:16:56 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:32:52.171 13:16:56 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 146516 00:32:52.171 13:16:56 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:32:52.171 killing process with pid 146516 00:32:52.171 13:16:56 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:32:52.171 13:16:56 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 146516' 00:32:52.171 13:16:56 -- common/autotest_common.sh@943 -- # kill 146516 00:32:52.171 13:16:56 -- common/autotest_common.sh@948 -- # wait 146516 00:32:54.703 13:16:58 -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:54.703 13:16:58 -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:32:54.703 13:16:58 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:32:54.703 13:16:58 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:32:54.703 13:16:58 -- common/autotest_common.sh@10 -- # set +x 00:32:54.703 ************************************ 00:32:54.703 START TEST bdev_hello_world 00:32:54.703 ************************************ 00:32:54.703 13:16:58 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:32:54.703 [2024-04-17 13:16:58.412431] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:32:54.703 [2024-04-17 13:16:58.412827] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146625 ] 00:32:54.703 [2024-04-17 13:16:58.581885] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:54.703 [2024-04-17 13:16:58.790317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:55.272 [2024-04-17 13:16:59.218701] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:32:55.272 [2024-04-17 13:16:59.219004] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:32:55.272 [2024-04-17 13:16:59.219078] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:32:55.272 [2024-04-17 13:16:59.222298] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:32:55.272 [2024-04-17 13:16:59.222935] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:32:55.272 [2024-04-17 13:16:59.223096] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:32:55.272 [2024-04-17 13:16:59.223346] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:32:55.272 00:32:55.272 [2024-04-17 13:16:59.223493] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:32:56.666 00:32:56.666 real 0m2.013s 00:32:56.666 user 0m1.716s 00:32:56.666 sys 0m0.196s 00:32:56.666 13:17:00 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:32:56.666 13:17:00 -- common/autotest_common.sh@10 -- # set +x 00:32:56.666 ************************************ 00:32:56.666 END TEST bdev_hello_world 00:32:56.666 ************************************ 00:32:56.666 13:17:00 -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:32:56.667 13:17:00 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:32:56.667 13:17:00 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:32:56.667 13:17:00 -- common/autotest_common.sh@10 -- # set +x 00:32:56.667 ************************************ 00:32:56.667 START TEST bdev_bounds 00:32:56.667 ************************************ 00:32:56.667 Process bdevio pid: 146680 00:32:56.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:56.667 13:17:00 -- common/autotest_common.sh@1099 -- # bdev_bounds '' 00:32:56.667 13:17:00 -- bdev/blockdev.sh@290 -- # bdevio_pid=146680 00:32:56.667 13:17:00 -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:32:56.667 13:17:00 -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 146680' 00:32:56.667 13:17:00 -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:32:56.667 13:17:00 -- bdev/blockdev.sh@293 -- # waitforlisten 146680 00:32:56.667 13:17:00 -- common/autotest_common.sh@817 -- # '[' -z 146680 ']' 00:32:56.667 13:17:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:56.667 13:17:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:56.667 13:17:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:56.667 13:17:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:56.667 13:17:00 -- common/autotest_common.sh@10 -- # set +x 00:32:56.667 [2024-04-17 13:17:00.509859] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:32:56.667 [2024-04-17 13:17:00.510279] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146680 ] 00:32:56.667 [2024-04-17 13:17:00.691383] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:56.926 [2024-04-17 13:17:00.904202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:56.926 [2024-04-17 13:17:00.904349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:56.926 [2024-04-17 13:17:00.904345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:57.493 13:17:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:57.493 13:17:01 -- common/autotest_common.sh@850 -- # return 0 00:32:57.493 13:17:01 -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:32:57.493 I/O targets: 00:32:57.493 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:32:57.493 00:32:57.493 00:32:57.493 CUnit - A unit testing framework for C - Version 2.1-3 00:32:57.493 http://cunit.sourceforge.net/ 00:32:57.493 00:32:57.493 00:32:57.493 Suite: bdevio tests on: Nvme0n1 00:32:57.493 Test: blockdev write read block ...passed 00:32:57.493 Test: blockdev write zeroes read block ...passed 00:32:57.493 Test: blockdev write zeroes read no split ...passed 00:32:57.493 Test: blockdev write zeroes read split ...passed 00:32:57.493 Test: blockdev write zeroes read split partial ...passed 00:32:57.493 Test: blockdev reset ...[2024-04-17 13:17:01.628460] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:32:57.493 [2024-04-17 13:17:01.631944] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
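Annotation: bdev_bounds splits the work between two processes, the bdevio server started against the same generated bdev config and tests.py, which connects over RPC and kicks off the CUnit suite whose pass/fail lines print here. A sketch of that flow; the backgrounding and cleanup framing are assumptions, the two command lines are taken from the trace:

    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio \
        -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
    bdevio_pid=$!
    # once the server is listening, drive the suite over RPC:
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
    kill "$bdevio_pid"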
00:32:57.493 passed 00:32:57.493 Test: blockdev write read 8 blocks ...passed 00:32:57.493 Test: blockdev write read size > 128k ...passed 00:32:57.493 Test: blockdev write read invalid size ...passed 00:32:57.493 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:32:57.493 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:32:57.493 Test: blockdev write read max offset ...passed 00:32:57.493 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:32:57.493 Test: blockdev writev readv 8 blocks ...passed 00:32:57.493 Test: blockdev writev readv 30 x 1block ...passed 00:32:57.493 Test: blockdev writev readv block ...passed 00:32:57.493 Test: blockdev writev readv size > 128k ...passed 00:32:57.493 Test: blockdev writev readv size > 128k in two iovs ...passed 00:32:57.493 Test: blockdev comparev and writev ...[2024-04-17 13:17:01.640503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x9f60d000 len:0x1000 00:32:57.752 [2024-04-17 13:17:01.640715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:32:57.752 passed 00:32:57.752 Test: blockdev nvme passthru rw ...passed 00:32:57.752 Test: blockdev nvme passthru vendor specific ...[2024-04-17 13:17:01.641870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:32:57.752 [2024-04-17 13:17:01.642024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:32:57.752 passed 00:32:57.752 Test: blockdev nvme admin passthru ...passed 00:32:57.752 Test: blockdev copy ...passed 00:32:57.752 00:32:57.752 Run Summary: Type Total Ran Passed Failed Inactive 00:32:57.752 suites 1 1 n/a 0 0 00:32:57.752 tests 23 23 23 0 0 00:32:57.752 asserts 152 152 152 0 n/a 00:32:57.752 00:32:57.752 Elapsed time = 0.201 seconds 00:32:57.752 0 00:32:57.752 13:17:01 -- bdev/blockdev.sh@295 -- # killprocess 146680 00:32:57.752 13:17:01 -- common/autotest_common.sh@924 -- # '[' -z 146680 ']' 00:32:57.752 13:17:01 -- common/autotest_common.sh@928 -- # kill -0 146680 00:32:57.752 13:17:01 -- common/autotest_common.sh@929 -- # uname 00:32:57.752 13:17:01 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:32:57.752 13:17:01 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 146680 00:32:57.752 13:17:01 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:32:57.752 killing process with pid 146680 00:32:57.752 13:17:01 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:32:57.752 13:17:01 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 146680' 00:32:57.752 13:17:01 -- common/autotest_common.sh@943 -- # kill 146680 00:32:57.752 13:17:01 -- common/autotest_common.sh@948 -- # wait 146680 00:32:58.686 ************************************ 00:32:58.686 END TEST bdev_bounds 00:32:58.686 ************************************ 00:32:58.686 13:17:02 -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:32:58.686 00:32:58.686 real 0m2.352s 00:32:58.686 user 0m5.505s 00:32:58.686 sys 0m0.333s 00:32:58.686 13:17:02 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:32:58.686 13:17:02 -- common/autotest_common.sh@10 -- # set +x 00:32:58.686 13:17:02 -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 
00:32:58.686 13:17:02 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:32:58.686 13:17:02 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:32:58.686 13:17:02 -- common/autotest_common.sh@10 -- # set +x 00:32:58.946 ************************************ 00:32:58.946 START TEST bdev_nbd 00:32:58.946 ************************************ 00:32:58.946 13:17:02 -- common/autotest_common.sh@1099 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:32:58.946 13:17:02 -- bdev/blockdev.sh@300 -- # uname -s 00:32:58.946 13:17:02 -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:32:58.946 13:17:02 -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:58.946 13:17:02 -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:32:58.946 13:17:02 -- bdev/blockdev.sh@304 -- # bdev_all=($2) 00:32:58.946 13:17:02 -- bdev/blockdev.sh@304 -- # local bdev_all 00:32:58.946 13:17:02 -- bdev/blockdev.sh@305 -- # local bdev_num=1 00:32:58.946 13:17:02 -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:32:58.946 13:17:02 -- bdev/blockdev.sh@311 -- # nbd_all=(/dev/nbd+([0-9])) 00:32:58.946 13:17:02 -- bdev/blockdev.sh@311 -- # local nbd_all 00:32:58.946 13:17:02 -- bdev/blockdev.sh@312 -- # bdev_num=1 00:32:58.946 13:17:02 -- bdev/blockdev.sh@314 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:32:58.946 13:17:02 -- bdev/blockdev.sh@314 -- # local nbd_list 00:32:58.946 13:17:02 -- bdev/blockdev.sh@315 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:32:58.946 13:17:02 -- bdev/blockdev.sh@315 -- # local bdev_list 00:32:58.946 13:17:02 -- bdev/blockdev.sh@318 -- # nbd_pid=146750 00:32:58.946 13:17:02 -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:32:58.946 13:17:02 -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:32:58.946 13:17:02 -- bdev/blockdev.sh@320 -- # waitforlisten 146750 /var/tmp/spdk-nbd.sock 00:32:58.946 13:17:02 -- common/autotest_common.sh@817 -- # '[' -z 146750 ']' 00:32:58.946 13:17:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:32:58.946 13:17:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:58.946 13:17:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:32:58.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:32:58.946 13:17:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:58.946 13:17:02 -- common/autotest_common.sh@10 -- # set +x 00:32:58.946 [2024-04-17 13:17:02.935467] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
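Annotation: bdev_nbd runs a dedicated bdev_svc app on its own RPC socket, /var/tmp/spdk-nbd.sock, so the NBD calls stay separate from the default spdk.sock. A sketch of the startup shown in the trace; flags and paths are from the log, the backgrounding and pid bookkeeping are assumptions:

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-nbd.sock -i 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
    nbd_pid=$!
    # every later nbd_* RPC goes through the same socket:
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0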
00:32:58.946 [2024-04-17 13:17:02.935903] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:59.204 [2024-04-17 13:17:03.100863] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:59.204 [2024-04-17 13:17:03.315382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:59.771 13:17:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:59.771 13:17:03 -- common/autotest_common.sh@850 -- # return 0 00:32:59.771 13:17:03 -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1 00:32:59.771 13:17:03 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:59.771 13:17:03 -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:32:59.771 13:17:03 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:32:59.771 13:17:03 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1 00:32:59.771 13:17:03 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:59.771 13:17:03 -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:32:59.771 13:17:03 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:32:59.771 13:17:03 -- bdev/nbd_common.sh@24 -- # local i 00:32:59.771 13:17:03 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:32:59.771 13:17:03 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:32:59.771 13:17:03 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:32:59.771 13:17:03 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:33:00.029 13:17:04 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:33:00.029 13:17:04 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:33:00.029 13:17:04 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:33:00.029 13:17:04 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:33:00.029 13:17:04 -- common/autotest_common.sh@855 -- # local i 00:33:00.029 13:17:04 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:33:00.029 13:17:04 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:33:00.029 13:17:04 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:33:00.029 13:17:04 -- common/autotest_common.sh@859 -- # break 00:33:00.029 13:17:04 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:33:00.029 13:17:04 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:33:00.029 13:17:04 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:00.029 1+0 records in 00:33:00.029 1+0 records out 00:33:00.029 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000673221 s, 6.1 MB/s 00:33:00.029 13:17:04 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:00.029 13:17:04 -- common/autotest_common.sh@872 -- # size=4096 00:33:00.029 13:17:04 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:00.029 13:17:04 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:33:00.029 13:17:04 -- common/autotest_common.sh@875 -- # return 0 00:33:00.029 13:17:04 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:33:00.029 13:17:04 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:33:00.029 13:17:04 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:00.286 13:17:04 -- 
bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:33:00.286 { 00:33:00.286 "nbd_device": "/dev/nbd0", 00:33:00.286 "bdev_name": "Nvme0n1" 00:33:00.286 } 00:33:00.286 ]' 00:33:00.286 13:17:04 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:33:00.286 13:17:04 -- bdev/nbd_common.sh@119 -- # echo '[ 00:33:00.286 { 00:33:00.286 "nbd_device": "/dev/nbd0", 00:33:00.286 "bdev_name": "Nvme0n1" 00:33:00.287 } 00:33:00.287 ]' 00:33:00.287 13:17:04 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:33:00.287 13:17:04 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:33:00.287 13:17:04 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:00.287 13:17:04 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:33:00.287 13:17:04 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:00.287 13:17:04 -- bdev/nbd_common.sh@51 -- # local i 00:33:00.287 13:17:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:00.287 13:17:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:00.871 13:17:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:00.871 13:17:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:00.871 13:17:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:00.871 13:17:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:00.871 13:17:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:00.871 13:17:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:00.871 13:17:04 -- bdev/nbd_common.sh@41 -- # break 00:33:00.871 13:17:04 -- bdev/nbd_common.sh@45 -- # return 0 00:33:00.871 13:17:04 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:00.871 13:17:04 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:00.871 13:17:04 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:00.871 13:17:04 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:33:00.871 13:17:04 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:33:00.871 13:17:04 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:01.130 13:17:05 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:33:01.130 13:17:05 -- bdev/nbd_common.sh@65 -- # echo '' 00:33:01.130 13:17:05 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:01.130 13:17:05 -- bdev/nbd_common.sh@65 -- # true 00:33:01.130 13:17:05 -- bdev/nbd_common.sh@65 -- # count=0 00:33:01.130 13:17:05 -- bdev/nbd_common.sh@66 -- # echo 0 00:33:01.130 13:17:05 -- bdev/nbd_common.sh@122 -- # count=0 00:33:01.130 13:17:05 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:33:01.130 13:17:05 -- bdev/nbd_common.sh@127 -- # return 0 00:33:01.130 13:17:05 -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:33:01.130 13:17:05 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:01.130 13:17:05 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:33:01.130 13:17:05 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:33:01.130 13:17:05 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:33:01.130 13:17:05 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:33:01.130 13:17:05 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:33:01.130 13:17:05 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:01.130 13:17:05 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 
00:33:01.130 13:17:05 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:01.130 13:17:05 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:33:01.130 13:17:05 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:01.130 13:17:05 -- bdev/nbd_common.sh@12 -- # local i 00:33:01.130 13:17:05 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:01.130 13:17:05 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:01.130 13:17:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:33:01.391 /dev/nbd0 00:33:01.391 13:17:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:01.391 13:17:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:01.391 13:17:05 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:33:01.391 13:17:05 -- common/autotest_common.sh@855 -- # local i 00:33:01.391 13:17:05 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:33:01.391 13:17:05 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:33:01.391 13:17:05 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:33:01.391 13:17:05 -- common/autotest_common.sh@859 -- # break 00:33:01.391 13:17:05 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:33:01.391 13:17:05 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:33:01.391 13:17:05 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:01.391 1+0 records in 00:33:01.391 1+0 records out 00:33:01.391 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000597061 s, 6.9 MB/s 00:33:01.391 13:17:05 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:01.391 13:17:05 -- common/autotest_common.sh@872 -- # size=4096 00:33:01.391 13:17:05 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:01.391 13:17:05 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:33:01.391 13:17:05 -- common/autotest_common.sh@875 -- # return 0 00:33:01.391 13:17:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:01.391 13:17:05 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:01.391 13:17:05 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:01.391 13:17:05 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:01.391 13:17:05 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:01.650 13:17:05 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:33:01.650 { 00:33:01.650 "nbd_device": "/dev/nbd0", 00:33:01.650 "bdev_name": "Nvme0n1" 00:33:01.650 } 00:33:01.650 ]' 00:33:01.650 13:17:05 -- bdev/nbd_common.sh@64 -- # echo '[ 00:33:01.650 { 00:33:01.650 "nbd_device": "/dev/nbd0", 00:33:01.650 "bdev_name": "Nvme0n1" 00:33:01.650 } 00:33:01.650 ]' 00:33:01.650 13:17:05 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:01.650 13:17:05 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:33:01.650 13:17:05 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:33:01.650 13:17:05 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:01.650 13:17:05 -- bdev/nbd_common.sh@65 -- # count=1 00:33:01.650 13:17:05 -- bdev/nbd_common.sh@66 -- # echo 1 00:33:01.650 13:17:05 -- bdev/nbd_common.sh@95 -- # count=1 00:33:01.650 13:17:05 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:33:01.650 13:17:05 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:33:01.650 13:17:05 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 
00:33:01.650 13:17:05 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:33:01.650 13:17:05 -- bdev/nbd_common.sh@71 -- # local operation=write 00:33:01.650 13:17:05 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:33:01.650 13:17:05 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:33:01.650 13:17:05 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:33:01.650 256+0 records in 00:33:01.650 256+0 records out 00:33:01.650 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00575884 s, 182 MB/s 00:33:01.650 13:17:05 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:33:01.650 13:17:05 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:33:01.650 256+0 records in 00:33:01.650 256+0 records out 00:33:01.650 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0606615 s, 17.3 MB/s 00:33:01.650 13:17:05 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:33:01.650 13:17:05 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:33:01.650 13:17:05 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:33:01.650 13:17:05 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:33:01.650 13:17:05 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:33:01.650 13:17:05 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:33:01.650 13:17:05 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:33:01.650 13:17:05 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:33:01.650 13:17:05 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:33:01.650 13:17:05 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:33:01.650 13:17:05 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:33:01.650 13:17:05 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:01.650 13:17:05 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:33:01.650 13:17:05 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:01.650 13:17:05 -- bdev/nbd_common.sh@51 -- # local i 00:33:01.650 13:17:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:01.650 13:17:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:02.217 13:17:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:02.217 13:17:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:02.217 13:17:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:02.217 13:17:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:02.217 13:17:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:02.217 13:17:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:02.217 13:17:06 -- bdev/nbd_common.sh@41 -- # break 00:33:02.217 13:17:06 -- bdev/nbd_common.sh@45 -- # return 0 00:33:02.217 13:17:06 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:02.217 13:17:06 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:02.217 13:17:06 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:02.217 13:17:06 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:33:02.217 13:17:06 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:33:02.217 13:17:06 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
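Annotation: the two dd runs above are the heart of nbd_dd_data_verify. One MiB of random data is staged in a file, pushed through the NBD device with direct I/O, then compared byte for byte against the source. Condensed from the trace, with the long repo paths shortened:

    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256           # stage 1 MiB
    dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct # write via NBD
    cmp -b -n 1M nbdrandtest /dev/nbd0                            # verify round trip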
00:33:02.476 13:17:06 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:33:02.476 13:17:06 -- bdev/nbd_common.sh@65 -- # echo '' 00:33:02.476 13:17:06 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:02.476 13:17:06 -- bdev/nbd_common.sh@65 -- # true 00:33:02.476 13:17:06 -- bdev/nbd_common.sh@65 -- # count=0 00:33:02.476 13:17:06 -- bdev/nbd_common.sh@66 -- # echo 0 00:33:02.476 13:17:06 -- bdev/nbd_common.sh@104 -- # count=0 00:33:02.476 13:17:06 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:33:02.476 13:17:06 -- bdev/nbd_common.sh@109 -- # return 0 00:33:02.476 13:17:06 -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:33:02.476 13:17:06 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:02.476 13:17:06 -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:33:02.476 13:17:06 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:33:02.476 13:17:06 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:33:02.476 13:17:06 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:33:02.734 malloc_lvol_verify 00:33:02.734 13:17:06 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:33:02.993 2d25305c-563e-4ac6-a7e1-d2496cbacc4b 00:33:02.993 13:17:06 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:33:03.252 1d75410a-ee0a-467b-af0a-cb0e0b7d7ae6 00:33:03.252 13:17:07 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:33:03.511 /dev/nbd0 00:33:03.511 13:17:07 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:33:03.511 mke2fs 1.45.5 (07-Jan-2020) 00:33:03.511 00:33:03.511 Filesystem too small for a journal 00:33:03.511 Creating filesystem with 1024 4k blocks and 1024 inodes 00:33:03.511 00:33:03.511 Allocating group tables: 0/1 done 00:33:03.511 Writing inode tables: 0/1 done 00:33:03.511 Writing superblocks and filesystem accounting information: 0/1 done 00:33:03.511 00:33:03.511 13:17:07 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:33:03.511 13:17:07 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:33:03.511 13:17:07 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:03.511 13:17:07 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:33:03.511 13:17:07 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:03.511 13:17:07 -- bdev/nbd_common.sh@51 -- # local i 00:33:03.511 13:17:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:03.511 13:17:07 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:03.770 13:17:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:03.770 13:17:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:03.770 13:17:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:03.770 13:17:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:03.770 13:17:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:03.770 13:17:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:03.770 13:17:07 -- bdev/nbd_common.sh@41 -- # break 00:33:03.770 13:17:07 -- bdev/nbd_common.sh@45 -- # return 0 00:33:03.770 13:17:07 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:33:03.770 13:17:07 -- 
bdev/nbd_common.sh@147 -- # return 0 00:33:03.770 13:17:07 -- bdev/blockdev.sh@326 -- # killprocess 146750 00:33:03.770 13:17:07 -- common/autotest_common.sh@924 -- # '[' -z 146750 ']' 00:33:03.770 13:17:07 -- common/autotest_common.sh@928 -- # kill -0 146750 00:33:03.770 13:17:07 -- common/autotest_common.sh@929 -- # uname 00:33:03.770 13:17:07 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:33:03.770 13:17:07 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 146750 00:33:03.770 13:17:07 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:33:03.770 13:17:07 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:33:03.770 13:17:07 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 146750' 00:33:03.770 killing process with pid 146750 00:33:03.770 13:17:07 -- common/autotest_common.sh@943 -- # kill 146750 00:33:03.770 13:17:07 -- common/autotest_common.sh@948 -- # wait 146750 00:33:05.148 ************************************ 00:33:05.148 END TEST bdev_nbd 00:33:05.148 ************************************ 00:33:05.148 13:17:09 -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:33:05.148 00:33:05.148 real 0m6.237s 00:33:05.148 user 0m9.172s 00:33:05.148 sys 0m1.148s 00:33:05.148 13:17:09 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:33:05.148 13:17:09 -- common/autotest_common.sh@10 -- # set +x 00:33:05.148 skipping fio tests on NVMe due to multi-ns failures. 00:33:05.148 13:17:09 -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:33:05.148 13:17:09 -- bdev/blockdev.sh@764 -- # '[' nvme = nvme ']' 00:33:05.148 13:17:09 -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:33:05.148 13:17:09 -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:05.148 13:17:09 -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:33:05.148 13:17:09 -- common/autotest_common.sh@1075 -- # '[' 16 -le 1 ']' 00:33:05.148 13:17:09 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:33:05.148 13:17:09 -- common/autotest_common.sh@10 -- # set +x 00:33:05.148 ************************************ 00:33:05.148 START TEST bdev_verify 00:33:05.148 ************************************ 00:33:05.148 13:17:09 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:33:05.148 [2024-04-17 13:17:09.245508] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:33:05.148 [2024-04-17 13:17:09.245932] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146962 ] 00:33:05.406 [2024-04-17 13:17:09.422269] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:05.664 [2024-04-17 13:17:09.695221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:05.664 [2024-04-17 13:17:09.695229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:05.664 [2024-04-17 13:17:09.746074] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:33:06.233 [2024-04-17 13:17:10.166181] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:33:06.233 Running I/O for 5 seconds... 00:33:11.500 00:33:11.500 Latency(us) 00:33:11.500 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:11.500 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:11.500 Verification LBA range: start 0x0 length 0xa0000 00:33:11.500 Nvme0n1 : 5.00 9592.34 37.47 0.00 0.00 13272.97 878.78 18588.39 00:33:11.500 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:33:11.500 Verification LBA range: start 0xa0000 length 0xa0000 00:33:11.500 Nvme0n1 : 5.01 9755.72 38.11 0.00 0.00 13044.91 714.94 20614.05 00:33:11.500 =================================================================================================================== 00:33:11.500 Total : 19348.06 75.58 0.00 0.00 13157.90 714.94 20614.05 00:33:12.436 ************************************ 00:33:12.436 END TEST bdev_verify 00:33:12.436 ************************************ 00:33:12.436 00:33:12.436 real 0m7.361s 00:33:12.436 user 0m13.408s 00:33:12.436 sys 0m0.293s 00:33:12.436 13:17:16 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:33:12.437 13:17:16 -- common/autotest_common.sh@10 -- # set +x 00:33:12.437 13:17:16 -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:33:12.437 13:17:16 -- common/autotest_common.sh@1075 -- # '[' 16 -le 1 ']' 00:33:12.437 13:17:16 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:33:12.437 13:17:16 -- common/autotest_common.sh@10 -- # set +x 00:33:12.696 ************************************ 00:33:12.696 START TEST bdev_verify_big_io 00:33:12.696 ************************************ 00:33:12.696 13:17:16 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:33:12.696 [2024-04-17 13:17:16.680403] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
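For reference, both the bdev_verify stage above and the bdev_verify_big_io stage starting here drive the bdevperf example app directly; a minimal sketch of the invocation, with the flags exactly as they appear in this log (paths assume this vagrant workspace):

    # 5-second verify workload, queue depth 128, 4 KiB I/O, reactors on cores 0-1 (mask 0x3);
    # the big-I/O variant below differs only in using -o 65536 (64 KiB I/O).
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3
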
00:33:12.696 [2024-04-17 13:17:16.681484] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147074 ] 00:33:12.954 [2024-04-17 13:17:16.856964] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:12.955 [2024-04-17 13:17:17.088240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:12.955 [2024-04-17 13:17:17.088255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:13.213 [2024-04-17 13:17:17.138776] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:33:13.519 [2024-04-17 13:17:17.526627] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:33:13.519 Running I/O for 5 seconds... 00:33:18.788 00:33:18.788 Latency(us) 00:33:18.788 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:18.788 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:33:18.788 Verification LBA range: start 0x0 length 0xa000 00:33:18.788 Nvme0n1 : 5.12 787.89 49.24 0.00 0.00 158603.51 307.20 174444.92 00:33:18.788 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:33:18.788 Verification LBA range: start 0xa000 length 0xa000 00:33:18.788 Nvme0n1 : 5.12 862.92 53.93 0.00 0.00 144746.62 155.46 175398.17 00:33:18.788 =================================================================================================================== 00:33:18.788 Total : 1650.81 103.18 0.00 0.00 151360.92 155.46 175398.17 00:33:20.166 ************************************ 00:33:20.166 END TEST bdev_verify_big_io 00:33:20.166 ************************************ 00:33:20.166 00:33:20.166 real 0m7.428s 00:33:20.166 user 0m13.575s 00:33:20.166 sys 0m0.275s 00:33:20.166 13:17:24 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:33:20.166 13:17:24 -- common/autotest_common.sh@10 -- # set +x 00:33:20.166 13:17:24 -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:20.166 13:17:24 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:33:20.166 13:17:24 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:33:20.166 13:17:24 -- common/autotest_common.sh@10 -- # set +x 00:33:20.166 ************************************ 00:33:20.166 START TEST bdev_write_zeroes 00:33:20.166 ************************************ 00:33:20.166 13:17:24 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:20.166 [2024-04-17 13:17:24.203219] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:33:20.166 [2024-04-17 13:17:24.203694] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147201 ] 00:33:20.424 [2024-04-17 13:17:24.376976] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:20.683 [2024-04-17 13:17:24.608668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:20.683 [2024-04-17 13:17:24.658450] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:33:20.941 [2024-04-17 13:17:25.053003] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:33:20.941 Running I/O for 1 seconds... 00:33:22.315 00:33:22.315 Latency(us) 00:33:22.315 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:22.315 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:33:22.315 Nvme0n1 : 1.00 54619.65 213.36 0.00 0.00 2336.99 755.90 13524.25 00:33:22.315 =================================================================================================================== 00:33:22.315 Total : 54619.65 213.36 0.00 0.00 2336.99 755.90 13524.25 00:33:23.291 ************************************ 00:33:23.291 END TEST bdev_write_zeroes 00:33:23.291 ************************************ 00:33:23.291 00:33:23.291 real 0m3.196s 00:33:23.291 user 0m2.808s 00:33:23.291 sys 0m0.277s 00:33:23.291 13:17:27 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:33:23.291 13:17:27 -- common/autotest_common.sh@10 -- # set +x 00:33:23.291 13:17:27 -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:23.291 13:17:27 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:33:23.291 13:17:27 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:33:23.291 13:17:27 -- common/autotest_common.sh@10 -- # set +x 00:33:23.291 ************************************ 00:33:23.291 START TEST bdev_json_nonenclosed 00:33:23.291 ************************************ 00:33:23.291 13:17:27 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:23.558 [2024-04-17 13:17:27.468670] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:33:23.558 [2024-04-17 13:17:27.469291] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147281 ] 00:33:23.558 [2024-04-17 13:17:27.639897] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:23.818 [2024-04-17 13:17:27.864931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:23.818 [2024-04-17 13:17:27.865418] json_config.c: 582:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:33:23.818 [2024-04-17 13:17:27.865692] rpc.c: 193:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:33:23.818 [2024-04-17 13:17:27.865972] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:33:24.385 ************************************ 00:33:24.385 END TEST bdev_json_nonenclosed 00:33:24.385 ************************************ 00:33:24.385 00:33:24.385 real 0m0.856s 00:33:24.385 user 0m0.608s 00:33:24.385 sys 0m0.145s 00:33:24.385 13:17:28 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:33:24.385 13:17:28 -- common/autotest_common.sh@10 -- # set +x 00:33:24.385 13:17:28 -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:24.385 13:17:28 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:33:24.385 13:17:28 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:33:24.385 13:17:28 -- common/autotest_common.sh@10 -- # set +x 00:33:24.385 ************************************ 00:33:24.385 START TEST bdev_json_nonarray 00:33:24.385 ************************************ 00:33:24.385 13:17:28 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:24.385 [2024-04-17 13:17:28.406006] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:33:24.385 [2024-04-17 13:17:28.406680] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147323 ] 00:33:24.644 [2024-04-17 13:17:28.576843] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:24.644 [2024-04-17 13:17:28.786831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:24.644 [2024-04-17 13:17:28.787472] json_config.c: 588:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
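The two JSON negative tests (bdev_json_nonenclosed above, bdev_json_nonarray here) feed bdevperf deliberately malformed configs. The log shows only the resulting error messages, not the files themselves; hypothetical contents consistent with those messages might look like:

    # Hypothetical nonenclosed.json: valid JSON, but the top level is not an object,
    # so json_config rejects it with "not enclosed in {}".
    cat > nonenclosed.json <<'EOF'
    []
    EOF
    # Hypothetical nonarray.json: enclosed in {}, but "subsystems" is not an array,
    # so json_config rejects it with "'subsystems' should be an array".
    cat > nonarray.json <<'EOF'
    { "subsystems": "bdev" }
    EOF
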
00:33:24.644 [2024-04-17 13:17:28.787770] rpc.c: 193:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:33:24.644 [2024-04-17 13:17:28.788092] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:33:25.212 00:33:25.212 real 0m0.848s 00:33:25.212 user 0m0.608s 00:33:25.212 sys 0m0.137s 00:33:25.212 13:17:29 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:33:25.212 ************************************ 00:33:25.212 END TEST bdev_json_nonarray 00:33:25.212 ************************************ 00:33:25.212 13:17:29 -- common/autotest_common.sh@10 -- # set +x 00:33:25.212 13:17:29 -- bdev/blockdev.sh@787 -- # [[ nvme == bdev ]] 00:33:25.212 13:17:29 -- bdev/blockdev.sh@794 -- # [[ nvme == gpt ]] 00:33:25.212 13:17:29 -- bdev/blockdev.sh@798 -- # [[ nvme == crypto_sw ]] 00:33:25.212 13:17:29 -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:33:25.212 13:17:29 -- bdev/blockdev.sh@811 -- # cleanup 00:33:25.212 13:17:29 -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:33:25.212 13:17:29 -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:33:25.212 13:17:29 -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:33:25.212 13:17:29 -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:33:25.212 13:17:29 -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:33:25.212 13:17:29 -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:33:25.212 00:33:25.212 real 0m34.741s 00:33:25.212 user 0m51.600s 00:33:25.212 sys 0m3.671s 00:33:25.212 13:17:29 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:33:25.212 ************************************ 00:33:25.212 END TEST blockdev_nvme 00:33:25.212 13:17:29 -- common/autotest_common.sh@10 -- # set +x 00:33:25.212 ************************************ 00:33:25.212 13:17:29 -- spdk/autotest.sh@208 -- # uname -s 00:33:25.212 13:17:29 -- spdk/autotest.sh@208 -- # [[ Linux == Linux ]] 00:33:25.212 13:17:29 -- spdk/autotest.sh@209 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:33:25.212 13:17:29 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:33:25.212 13:17:29 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:33:25.212 13:17:29 -- common/autotest_common.sh@10 -- # set +x 00:33:25.212 ************************************ 00:33:25.212 START TEST blockdev_nvme_gpt 00:33:25.212 ************************************ 00:33:25.212 13:17:29 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:33:25.472 * Looking for test storage... 
00:33:25.472 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:33:25.472 13:17:29 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:33:25.472 13:17:29 -- bdev/nbd_common.sh@6 -- # set -e 00:33:25.472 13:17:29 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:33:25.472 13:17:29 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:33:25.472 13:17:29 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:33:25.472 13:17:29 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:33:25.472 13:17:29 -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:33:25.472 13:17:29 -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:33:25.472 13:17:29 -- bdev/blockdev.sh@20 -- # : 00:33:25.472 13:17:29 -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:33:25.472 13:17:29 -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:33:25.472 13:17:29 -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:33:25.472 13:17:29 -- bdev/blockdev.sh@674 -- # uname -s 00:33:25.472 13:17:29 -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:33:25.472 13:17:29 -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:33:25.472 13:17:29 -- bdev/blockdev.sh@682 -- # test_type=gpt 00:33:25.472 13:17:29 -- bdev/blockdev.sh@683 -- # crypto_device= 00:33:25.472 13:17:29 -- bdev/blockdev.sh@684 -- # dek= 00:33:25.472 13:17:29 -- bdev/blockdev.sh@685 -- # env_ctx= 00:33:25.472 13:17:29 -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:33:25.472 13:17:29 -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:33:25.472 13:17:29 -- bdev/blockdev.sh@690 -- # [[ gpt == bdev ]] 00:33:25.472 13:17:29 -- bdev/blockdev.sh@690 -- # [[ gpt == crypto_* ]] 00:33:25.472 13:17:29 -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:33:25.472 13:17:29 -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=147414 00:33:25.472 13:17:29 -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:33:25.472 13:17:29 -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:33:25.472 13:17:29 -- bdev/blockdev.sh@49 -- # waitforlisten 147414 00:33:25.472 13:17:29 -- common/autotest_common.sh@817 -- # '[' -z 147414 ']' 00:33:25.472 13:17:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:25.472 13:17:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:25.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:25.472 13:17:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:25.473 13:17:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:25.473 13:17:29 -- common/autotest_common.sh@10 -- # set +x 00:33:25.473 [2024-04-17 13:17:29.484665] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
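The spdk_tgt started here comes up empty; the NVMe bdev it will partition is attached a few steps later. A sketch of that step, quoting the rpc verbatim from further down in this log (rpc_cmd is autotest's shell helper that forwards to the SPDK RPC server):

    # Attach the VM's single PCIe NVMe controller at 0000:00:10.0 as "Nvme0";
    # its namespace then appears as bdev Nvme0n1, the base for the GPT partitions below.
    rpc_cmd load_subsystem_config -j '{ "subsystem": "bdev", "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } } ] }'
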
00:33:25.473 [2024-04-17 13:17:29.484891] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147414 ] 00:33:25.731 [2024-04-17 13:17:29.657262] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:25.732 [2024-04-17 13:17:29.866091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:26.668 13:17:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:26.668 13:17:30 -- common/autotest_common.sh@850 -- # return 0 00:33:26.668 13:17:30 -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:33:26.668 13:17:30 -- bdev/blockdev.sh@702 -- # setup_gpt_conf 00:33:26.668 13:17:30 -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:33:26.927 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:33:26.927 Waiting for block devices as requested 00:33:26.927 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:33:26.927 13:17:31 -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:33:26.927 13:17:31 -- common/autotest_common.sh@1643 -- # zoned_devs=() 00:33:26.927 13:17:31 -- common/autotest_common.sh@1643 -- # local -gA zoned_devs 00:33:26.927 13:17:31 -- common/autotest_common.sh@1644 -- # local nvme bdf 00:33:26.927 13:17:31 -- common/autotest_common.sh@1646 -- # for nvme in /sys/block/nvme* 00:33:26.927 13:17:31 -- common/autotest_common.sh@1647 -- # is_block_zoned nvme0n1 00:33:26.927 13:17:31 -- common/autotest_common.sh@1636 -- # local device=nvme0n1 00:33:26.927 13:17:31 -- common/autotest_common.sh@1638 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:26.927 13:17:31 -- common/autotest_common.sh@1639 -- # [[ none != none ]] 00:33:26.927 13:17:31 -- bdev/blockdev.sh@107 -- # nvme_devs=(/sys/bus/pci/drivers/nvme/*/nvme/nvme*/nvme*n*) 00:33:26.927 13:17:31 -- bdev/blockdev.sh@107 -- # local nvme_devs nvme_dev 00:33:26.927 13:17:31 -- bdev/blockdev.sh@108 -- # gpt_nvme= 00:33:26.927 13:17:31 -- bdev/blockdev.sh@110 -- # for nvme_dev in "${nvme_devs[@]}" 00:33:26.927 13:17:31 -- bdev/blockdev.sh@111 -- # [[ -z '' ]] 00:33:26.927 13:17:31 -- bdev/blockdev.sh@112 -- # dev=/dev/nvme0n1 00:33:26.927 13:17:31 -- bdev/blockdev.sh@113 -- # parted /dev/nvme0n1 -ms print 00:33:27.186 13:17:31 -- bdev/blockdev.sh@113 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:33:27.186 BYT; 00:33:27.186 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:33:27.186 13:17:31 -- bdev/blockdev.sh@114 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:33:27.186 BYT; 00:33:27.186 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:33:27.186 13:17:31 -- bdev/blockdev.sh@115 -- # gpt_nvme=/dev/nvme0n1 00:33:27.186 13:17:31 -- bdev/blockdev.sh@116 -- # break 00:33:27.186 13:17:31 -- bdev/blockdev.sh@119 -- # [[ -n /dev/nvme0n1 ]] 00:33:27.186 13:17:31 -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:33:27.186 13:17:31 -- bdev/blockdev.sh@125 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:33:27.186 13:17:31 -- bdev/blockdev.sh@128 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:33:27.446 13:17:31 -- bdev/blockdev.sh@130 -- # get_spdk_gpt_old 00:33:27.446 13:17:31 -- 
scripts/common.sh@408 -- # local spdk_guid 00:33:27.446 13:17:31 -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:33:27.446 13:17:31 -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:33:27.446 13:17:31 -- scripts/common.sh@413 -- # IFS='()' 00:33:27.446 13:17:31 -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:33:27.446 13:17:31 -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:33:27.446 13:17:31 -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:33:27.446 13:17:31 -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:33:27.446 13:17:31 -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:33:27.446 13:17:31 -- bdev/blockdev.sh@130 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:33:27.446 13:17:31 -- bdev/blockdev.sh@131 -- # get_spdk_gpt 00:33:27.446 13:17:31 -- scripts/common.sh@420 -- # local spdk_guid 00:33:27.446 13:17:31 -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:33:27.446 13:17:31 -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:33:27.446 13:17:31 -- scripts/common.sh@425 -- # IFS='()' 00:33:27.446 13:17:31 -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:33:27.446 13:17:31 -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:33:27.446 13:17:31 -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:33:27.446 13:17:31 -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:33:27.446 13:17:31 -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:33:27.446 13:17:31 -- bdev/blockdev.sh@131 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:33:27.446 13:17:31 -- bdev/blockdev.sh@132 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:33:28.383 The operation has completed successfully. 00:33:28.383 13:17:32 -- bdev/blockdev.sh@133 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:33:29.320 The operation has completed successfully. 
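The GPT layout just written can be reproduced by hand; a sketch of the equivalent commands, using the partition-type GUIDs grepped out of module/bdev/gpt/gpt.h above and the fixed unique partition GUIDs hard-coded in blockdev.sh:

    # Two equal halves named for the test, on the freshly wiped namespace.
    parted -s /dev/nvme0n1 mklabel gpt \
        mkpart SPDK_TEST_first 0% 50% \
        mkpart SPDK_TEST_second 50% 100%
    # Partition 1 gets SPDK_GPT_PART_TYPE_GUID, partition 2 the legacy _OLD variant;
    # the gpt vbdev module later exposes both as Nvme0n1p1/Nvme0n1p2
    # (see the bdev_get_bdevs dump below).
    sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b \
           -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
    sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c \
           -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1
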
00:33:29.320 13:17:33 -- bdev/blockdev.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:33:29.887 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:33:29.887 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:33:30.825 13:17:34 -- bdev/blockdev.sh@135 -- # rpc_cmd bdev_get_bdevs 00:33:30.825 13:17:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:30.825 13:17:34 -- common/autotest_common.sh@10 -- # set +x 00:33:30.825 [] 00:33:30.825 13:17:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:30.825 13:17:34 -- bdev/blockdev.sh@136 -- # setup_nvme_conf 00:33:30.825 13:17:34 -- bdev/blockdev.sh@81 -- # local json 00:33:30.825 13:17:34 -- bdev/blockdev.sh@82 -- # mapfile -t json 00:33:30.825 13:17:34 -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:33:30.825 13:17:34 -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\''' 00:33:30.825 13:17:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:30.825 13:17:34 -- common/autotest_common.sh@10 -- # set +x 00:33:30.825 13:17:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:30.825 13:17:34 -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:33:30.825 13:17:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:30.825 13:17:34 -- common/autotest_common.sh@10 -- # set +x 00:33:30.825 13:17:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:30.825 13:17:34 -- bdev/blockdev.sh@740 -- # cat 00:33:30.825 13:17:34 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:33:30.825 13:17:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:30.825 13:17:34 -- common/autotest_common.sh@10 -- # set +x 00:33:30.825 13:17:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:30.825 13:17:34 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:33:30.825 13:17:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:30.825 13:17:34 -- common/autotest_common.sh@10 -- # set +x 00:33:30.825 13:17:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:30.825 13:17:34 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:33:30.825 13:17:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:30.825 13:17:34 -- common/autotest_common.sh@10 -- # set +x 00:33:30.825 13:17:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:30.825 13:17:34 -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:33:30.825 13:17:34 -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:33:30.825 13:17:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:30.825 13:17:34 -- common/autotest_common.sh@10 -- # set +x 00:33:30.825 13:17:34 -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:33:31.084 13:17:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:31.084 13:17:35 -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:33:31.084 13:17:35 -- bdev/blockdev.sh@749 -- # jq -r .name 00:33:31.084 13:17:35 -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' 00:33:31.084 13:17:35 -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:33:31.084 13:17:35 -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1p1 00:33:31.084 13:17:35 -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:33:31.084 13:17:35 -- bdev/blockdev.sh@754 -- # killprocess 147414 00:33:31.084 13:17:35 -- common/autotest_common.sh@924 -- # '[' -z 147414 ']' 00:33:31.084 13:17:35 -- common/autotest_common.sh@928 -- # kill -0 147414 00:33:31.084 13:17:35 -- common/autotest_common.sh@929 -- # uname 00:33:31.084 13:17:35 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:33:31.084 13:17:35 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 147414 00:33:31.084 13:17:35 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:33:31.084 13:17:35 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:33:31.084 13:17:35 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 147414' 00:33:31.084 killing process with pid 147414 00:33:31.084 13:17:35 -- common/autotest_common.sh@943 -- # kill 147414 00:33:31.084 13:17:35 -- common/autotest_common.sh@948 -- # wait 147414 00:33:33.619 13:17:37 -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:33.619 13:17:37 -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:33:33.619 13:17:37 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:33:33.619 13:17:37 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:33:33.619 13:17:37 -- common/autotest_common.sh@10 -- # set +x 00:33:33.619 ************************************ 00:33:33.619 START TEST bdev_hello_world 00:33:33.619 ************************************ 00:33:33.619 13:17:37 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b 
Nvme0n1p1 '' 00:33:33.619 [2024-04-17 13:17:37.376937] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:33:33.619 [2024-04-17 13:17:37.377387] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147895 ] 00:33:33.619 [2024-04-17 13:17:37.547713] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:33.876 [2024-04-17 13:17:37.775253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:34.135 [2024-04-17 13:17:38.243229] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:33:34.135 [2024-04-17 13:17:38.243480] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:33:34.135 [2024-04-17 13:17:38.243620] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:33:34.135 [2024-04-17 13:17:38.246626] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:33:34.135 [2024-04-17 13:17:38.247149] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:33:34.135 [2024-04-17 13:17:38.247321] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:33:34.135 [2024-04-17 13:17:38.247648] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:33:34.135 00:33:34.135 [2024-04-17 13:17:38.247829] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:33:35.510 00:33:35.510 real 0m2.112s 00:33:35.510 user 0m1.766s 00:33:35.510 sys 0m0.244s 00:33:35.511 13:17:39 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:33:35.511 13:17:39 -- common/autotest_common.sh@10 -- # set +x 00:33:35.511 ************************************ 00:33:35.511 END TEST bdev_hello_world 00:33:35.511 ************************************ 00:33:35.511 13:17:39 -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:33:35.511 13:17:39 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:33:35.511 13:17:39 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:33:35.511 13:17:39 -- common/autotest_common.sh@10 -- # set +x 00:33:35.511 ************************************ 00:33:35.511 START TEST bdev_bounds 00:33:35.511 ************************************ 00:33:35.511 13:17:39 -- common/autotest_common.sh@1099 -- # bdev_bounds '' 00:33:35.511 13:17:39 -- bdev/blockdev.sh@290 -- # bdevio_pid=147970 00:33:35.511 13:17:39 -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:33:35.511 Process bdevio pid: 147970 00:33:35.511 13:17:39 -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 147970' 00:33:35.511 13:17:39 -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:33:35.511 13:17:39 -- bdev/blockdev.sh@293 -- # waitforlisten 147970 00:33:35.511 13:17:39 -- common/autotest_common.sh@817 -- # '[' -z 147970 ']' 00:33:35.511 13:17:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:35.511 13:17:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:35.511 13:17:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:35.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:35.511 13:17:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:35.511 13:17:39 -- common/autotest_common.sh@10 -- # set +x 00:33:35.511 [2024-04-17 13:17:39.570056] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:33:35.511 [2024-04-17 13:17:39.570533] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147970 ] 00:33:35.769 [2024-04-17 13:17:39.752959] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:36.028 [2024-04-17 13:17:39.985101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:36.028 [2024-04-17 13:17:39.985233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:36.028 [2024-04-17 13:17:39.985223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:36.595 13:17:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:36.595 13:17:40 -- common/autotest_common.sh@850 -- # return 0 00:33:36.595 13:17:40 -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:33:36.595 I/O targets: 00:33:36.595 Nvme0n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:33:36.595 Nvme0n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:33:36.595 00:33:36.595 00:33:36.595 CUnit - A unit testing framework for C - Version 2.1-3 00:33:36.595 http://cunit.sourceforge.net/ 00:33:36.595 00:33:36.595 00:33:36.595 Suite: bdevio tests on: Nvme0n1p2 00:33:36.595 Test: blockdev write read block ...passed 00:33:36.595 Test: blockdev write zeroes read block ...passed 00:33:36.595 Test: blockdev write zeroes read no split ...passed 00:33:36.595 Test: blockdev write zeroes read split ...passed 00:33:36.595 Test: blockdev write zeroes read split partial ...passed 00:33:36.595 Test: blockdev reset ...[2024-04-17 13:17:40.643737] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:33:36.595 [2024-04-17 13:17:40.647544] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:33:36.595 passed 00:33:36.595 Test: blockdev write read 8 blocks ...passed 00:33:36.595 Test: blockdev write read size > 128k ...passed 00:33:36.595 Test: blockdev write read invalid size ...passed 00:33:36.595 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:33:36.595 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:33:36.595 Test: blockdev write read max offset ...passed 00:33:36.595 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:33:36.595 Test: blockdev writev readv 8 blocks ...passed 00:33:36.595 Test: blockdev writev readv 30 x 1block ...passed 00:33:36.595 Test: blockdev writev readv block ...passed 00:33:36.595 Test: blockdev writev readv size > 128k ...passed 00:33:36.595 Test: blockdev writev readv size > 128k in two iovs ...passed 00:33:36.595 Test: blockdev comparev and writev ...[2024-04-17 13:17:40.657267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0xa200b000 len:0x1000 00:33:36.595 [2024-04-17 13:17:40.657460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:33:36.595 passed 00:33:36.595 Test: blockdev nvme passthru rw ...passed 00:33:36.595 Test: blockdev nvme passthru vendor specific ...passed 00:33:36.595 Test: blockdev nvme admin passthru ...passed 00:33:36.595 Test: blockdev copy ...passed 00:33:36.595 Suite: bdevio tests on: Nvme0n1p1 00:33:36.595 Test: blockdev write read block ...passed 00:33:36.595 Test: blockdev write zeroes read block ...passed 00:33:36.595 Test: blockdev write zeroes read no split ...passed 00:33:36.595 Test: blockdev write zeroes read split ...passed 00:33:36.595 Test: blockdev write zeroes read split partial ...passed 00:33:36.595 Test: blockdev reset ...[2024-04-17 13:17:40.716287] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:33:36.595 [2024-04-17 13:17:40.720235] bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:33:36.595 passed 00:33:36.595 Test: blockdev write read 8 blocks ...passed 00:33:36.595 Test: blockdev write read size > 128k ...passed 00:33:36.595 Test: blockdev write read invalid size ...passed 00:33:36.595 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:33:36.595 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:33:36.595 Test: blockdev write read max offset ...passed 00:33:36.595 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:33:36.595 Test: blockdev writev readv 8 blocks ...passed 00:33:36.595 Test: blockdev writev readv 30 x 1block ...passed 00:33:36.595 Test: blockdev writev readv block ...passed 00:33:36.595 Test: blockdev writev readv size > 128k ...passed 00:33:36.595 Test: blockdev writev readv size > 128k in two iovs ...passed 00:33:36.595 Test: blockdev comparev and writev ...[2024-04-17 13:17:40.730135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0xa200d000 len:0x1000 00:33:36.595 [2024-04-17 13:17:40.730363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:33:36.595 passed 00:33:36.595 Test: blockdev nvme passthru rw ...passed 00:33:36.595 Test: blockdev nvme passthru vendor specific ...passed 00:33:36.595 Test: blockdev nvme admin passthru ...passed 00:33:36.595 Test: blockdev copy ...passed 00:33:36.595 00:33:36.595 Run Summary: Type Total Ran Passed Failed Inactive 00:33:36.595 suites 2 2 n/a 0 0 00:33:36.595 tests 46 46 46 0 0 00:33:36.595 asserts 284 284 284 0 n/a 00:33:36.595 00:33:36.595 Elapsed time = 0.398 seconds 00:33:36.595 0 00:33:36.855 13:17:40 -- bdev/blockdev.sh@295 -- # killprocess 147970 00:33:36.855 13:17:40 -- common/autotest_common.sh@924 -- # '[' -z 147970 ']' 00:33:36.855 13:17:40 -- common/autotest_common.sh@928 -- # kill -0 147970 00:33:36.855 13:17:40 -- common/autotest_common.sh@929 -- # uname 00:33:36.855 13:17:40 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:33:36.855 13:17:40 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 147970 00:33:36.855 killing process with pid 147970 00:33:36.855 13:17:40 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:33:36.855 13:17:40 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:33:36.855 13:17:40 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 147970' 00:33:36.855 13:17:40 -- common/autotest_common.sh@943 -- # kill 147970 00:33:36.855 13:17:40 -- common/autotest_common.sh@948 -- # wait 147970 00:33:37.792 ************************************ 00:33:37.792 END TEST bdev_bounds 00:33:37.792 ************************************ 00:33:37.792 13:17:41 -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:33:37.792 00:33:37.792 real 0m2.405s 00:33:37.792 user 0m5.541s 00:33:37.792 sys 0m0.330s 00:33:37.792 13:17:41 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:33:37.792 13:17:41 -- common/autotest_common.sh@10 -- # set +x 00:33:38.052 13:17:41 -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:33:38.052 13:17:41 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:33:38.052 13:17:41 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:33:38.052 13:17:41 -- common/autotest_common.sh@10 -- # set +x 00:33:38.052 ************************************ 00:33:38.052 START TEST bdev_nbd 
00:33:38.052 ************************************ 00:33:38.052 13:17:41 -- common/autotest_common.sh@1099 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:33:38.052 13:17:41 -- bdev/blockdev.sh@300 -- # uname -s 00:33:38.052 13:17:41 -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:33:38.052 13:17:41 -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:38.052 13:17:41 -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:33:38.052 13:17:41 -- bdev/blockdev.sh@304 -- # bdev_all=($2) 00:33:38.052 13:17:41 -- bdev/blockdev.sh@304 -- # local bdev_all 00:33:38.052 13:17:41 -- bdev/blockdev.sh@305 -- # local bdev_num=2 00:33:38.052 13:17:41 -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:33:38.052 13:17:41 -- bdev/blockdev.sh@311 -- # nbd_all=(/dev/nbd+([0-9])) 00:33:38.052 13:17:41 -- bdev/blockdev.sh@311 -- # local nbd_all 00:33:38.052 13:17:41 -- bdev/blockdev.sh@312 -- # bdev_num=2 00:33:38.052 13:17:41 -- bdev/blockdev.sh@314 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:33:38.052 13:17:41 -- bdev/blockdev.sh@314 -- # local nbd_list 00:33:38.052 13:17:41 -- bdev/blockdev.sh@315 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:33:38.052 13:17:41 -- bdev/blockdev.sh@315 -- # local bdev_list 00:33:38.052 13:17:41 -- bdev/blockdev.sh@318 -- # nbd_pid=148037 00:33:38.052 13:17:41 -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:33:38.052 13:17:41 -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:33:38.052 13:17:41 -- bdev/blockdev.sh@320 -- # waitforlisten 148037 /var/tmp/spdk-nbd.sock 00:33:38.052 13:17:41 -- common/autotest_common.sh@817 -- # '[' -z 148037 ']' 00:33:38.052 13:17:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:33:38.052 13:17:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:38.052 13:17:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:33:38.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:33:38.052 13:17:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:38.052 13:17:41 -- common/autotest_common.sh@10 -- # set +x 00:33:38.052 [2024-04-17 13:17:42.052181] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:33:38.052 [2024-04-17 13:17:42.052493] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:38.311 [2024-04-17 13:17:42.211335] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:38.311 [2024-04-17 13:17:42.423959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:38.878 13:17:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:38.878 13:17:43 -- common/autotest_common.sh@850 -- # return 0 00:33:38.878 13:17:43 -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:33:38.878 13:17:43 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:38.878 13:17:43 -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:33:38.878 13:17:43 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:33:38.878 13:17:43 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:33:38.878 13:17:43 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:38.878 13:17:43 -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:33:38.878 13:17:43 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:33:38.878 13:17:43 -- bdev/nbd_common.sh@24 -- # local i 00:33:38.878 13:17:43 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:33:38.878 13:17:43 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:33:38.878 13:17:43 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:33:38.878 13:17:43 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:33:39.137 13:17:43 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:33:39.137 13:17:43 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:33:39.137 13:17:43 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:33:39.137 13:17:43 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:33:39.137 13:17:43 -- common/autotest_common.sh@855 -- # local i 00:33:39.137 13:17:43 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:33:39.137 13:17:43 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:33:39.137 13:17:43 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:33:39.137 13:17:43 -- common/autotest_common.sh@859 -- # break 00:33:39.137 13:17:43 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:33:39.137 13:17:43 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:33:39.137 13:17:43 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:39.137 1+0 records in 00:33:39.137 1+0 records out 00:33:39.137 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000642043 s, 6.4 MB/s 00:33:39.137 13:17:43 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:39.137 13:17:43 -- common/autotest_common.sh@872 -- # size=4096 00:33:39.137 13:17:43 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:39.396 13:17:43 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:33:39.396 13:17:43 -- common/autotest_common.sh@875 -- # return 0 00:33:39.396 13:17:43 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:33:39.396 13:17:43 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:33:39.396 13:17:43 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme0n1p2 00:33:39.654 13:17:43 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:33:39.654 13:17:43 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:33:39.655 13:17:43 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:33:39.655 13:17:43 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:33:39.655 13:17:43 -- common/autotest_common.sh@855 -- # local i 00:33:39.655 13:17:43 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:33:39.655 13:17:43 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:33:39.655 13:17:43 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:33:39.655 13:17:43 -- common/autotest_common.sh@859 -- # break 00:33:39.655 13:17:43 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:33:39.655 13:17:43 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:33:39.655 13:17:43 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:39.655 1+0 records in 00:33:39.655 1+0 records out 00:33:39.655 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000619582 s, 6.6 MB/s 00:33:39.655 13:17:43 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:39.655 13:17:43 -- common/autotest_common.sh@872 -- # size=4096 00:33:39.655 13:17:43 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:39.655 13:17:43 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:33:39.655 13:17:43 -- common/autotest_common.sh@875 -- # return 0 00:33:39.655 13:17:43 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:33:39.655 13:17:43 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:33:39.655 13:17:43 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:39.914 13:17:43 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:33:39.914 { 00:33:39.914 "nbd_device": "/dev/nbd0", 00:33:39.914 "bdev_name": "Nvme0n1p1" 00:33:39.914 }, 00:33:39.914 { 00:33:39.914 "nbd_device": "/dev/nbd1", 00:33:39.914 "bdev_name": "Nvme0n1p2" 00:33:39.914 } 00:33:39.914 ]' 00:33:39.914 13:17:43 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:33:39.914 13:17:43 -- bdev/nbd_common.sh@119 -- # echo '[ 00:33:39.914 { 00:33:39.914 "nbd_device": "/dev/nbd0", 00:33:39.914 "bdev_name": "Nvme0n1p1" 00:33:39.914 }, 00:33:39.914 { 00:33:39.914 "nbd_device": "/dev/nbd1", 00:33:39.914 "bdev_name": "Nvme0n1p2" 00:33:39.914 } 00:33:39.914 ]' 00:33:39.914 13:17:43 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:33:39.914 13:17:43 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:33:39.914 13:17:43 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:39.914 13:17:43 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:33:39.914 13:17:43 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:39.914 13:17:43 -- bdev/nbd_common.sh@51 -- # local i 00:33:39.914 13:17:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:39.914 13:17:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:40.173 13:17:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:40.173 13:17:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:40.173 13:17:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:40.173 13:17:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:40.173 13:17:44 -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:40.173 13:17:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:40.173 13:17:44 -- bdev/nbd_common.sh@41 -- # break 00:33:40.173 13:17:44 -- bdev/nbd_common.sh@45 -- # return 0 00:33:40.173 13:17:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:40.173 13:17:44 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:33:40.432 13:17:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:40.432 13:17:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:40.432 13:17:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:40.432 13:17:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:40.432 13:17:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:40.432 13:17:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:40.432 13:17:44 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:33:40.432 13:17:44 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:33:40.432 13:17:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:40.432 13:17:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:40.432 13:17:44 -- bdev/nbd_common.sh@41 -- # break 00:33:40.432 13:17:44 -- bdev/nbd_common.sh@45 -- # return 0 00:33:40.432 13:17:44 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:40.432 13:17:44 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:40.432 13:17:44 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:40.690 13:17:44 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:33:40.690 13:17:44 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:33:40.690 13:17:44 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:40.690 13:17:44 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:33:40.690 13:17:44 -- bdev/nbd_common.sh@65 -- # echo '' 00:33:40.690 13:17:44 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:40.690 13:17:44 -- bdev/nbd_common.sh@65 -- # true 00:33:40.690 13:17:44 -- bdev/nbd_common.sh@65 -- # count=0 00:33:40.690 13:17:44 -- bdev/nbd_common.sh@66 -- # echo 0 00:33:40.690 13:17:44 -- bdev/nbd_common.sh@122 -- # count=0 00:33:40.690 13:17:44 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:33:40.690 13:17:44 -- bdev/nbd_common.sh@127 -- # return 0 00:33:40.690 13:17:44 -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:33:40.690 13:17:44 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:40.690 13:17:44 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:33:40.690 13:17:44 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:33:40.690 13:17:44 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:33:40.690 13:17:44 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:33:40.690 13:17:44 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:33:40.690 13:17:44 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:40.690 13:17:44 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:33:40.690 13:17:44 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:40.690 13:17:44 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:33:40.690 13:17:44 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:40.690 13:17:44 -- bdev/nbd_common.sh@12 -- # local i 00:33:40.690 13:17:44 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:40.690 13:17:44 -- bdev/nbd_common.sh@14 -- 
# (( i < 2 )) 00:33:40.690 13:17:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:33:40.949 /dev/nbd0 00:33:41.208 13:17:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:41.208 13:17:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:41.208 13:17:45 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:33:41.208 13:17:45 -- common/autotest_common.sh@855 -- # local i 00:33:41.208 13:17:45 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:33:41.208 13:17:45 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:33:41.208 13:17:45 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:33:41.208 13:17:45 -- common/autotest_common.sh@859 -- # break 00:33:41.208 13:17:45 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:33:41.208 13:17:45 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:33:41.208 13:17:45 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:41.208 1+0 records in 00:33:41.209 1+0 records out 00:33:41.209 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000703575 s, 5.8 MB/s 00:33:41.209 13:17:45 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:41.209 13:17:45 -- common/autotest_common.sh@872 -- # size=4096 00:33:41.209 13:17:45 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:41.209 13:17:45 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:33:41.209 13:17:45 -- common/autotest_common.sh@875 -- # return 0 00:33:41.209 13:17:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:41.209 13:17:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:41.209 13:17:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:33:41.468 /dev/nbd1 00:33:41.468 13:17:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:33:41.468 13:17:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:33:41.468 13:17:45 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:33:41.468 13:17:45 -- common/autotest_common.sh@855 -- # local i 00:33:41.468 13:17:45 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:33:41.468 13:17:45 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:33:41.468 13:17:45 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:33:41.468 13:17:45 -- common/autotest_common.sh@859 -- # break 00:33:41.468 13:17:45 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:33:41.468 13:17:45 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:33:41.468 13:17:45 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:41.468 1+0 records in 00:33:41.468 1+0 records out 00:33:41.468 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000466085 s, 8.8 MB/s 00:33:41.468 13:17:45 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:41.468 13:17:45 -- common/autotest_common.sh@872 -- # size=4096 00:33:41.468 13:17:45 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:41.468 13:17:45 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:33:41.468 13:17:45 -- common/autotest_common.sh@875 -- # return 0 00:33:41.468 13:17:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:41.468 13:17:45 -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:41.468 13:17:45 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:41.468 13:17:45 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:41.468 13:17:45 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:41.727 13:17:45 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:33:41.727 { 00:33:41.727 "nbd_device": "/dev/nbd0", 00:33:41.727 "bdev_name": "Nvme0n1p1" 00:33:41.727 }, 00:33:41.727 { 00:33:41.727 "nbd_device": "/dev/nbd1", 00:33:41.727 "bdev_name": "Nvme0n1p2" 00:33:41.727 } 00:33:41.727 ]' 00:33:41.727 13:17:45 -- bdev/nbd_common.sh@64 -- # echo '[ 00:33:41.727 { 00:33:41.727 "nbd_device": "/dev/nbd0", 00:33:41.727 "bdev_name": "Nvme0n1p1" 00:33:41.727 }, 00:33:41.727 { 00:33:41.727 "nbd_device": "/dev/nbd1", 00:33:41.727 "bdev_name": "Nvme0n1p2" 00:33:41.727 } 00:33:41.727 ]' 00:33:41.727 13:17:45 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:41.727 13:17:45 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:33:41.727 /dev/nbd1' 00:33:41.727 13:17:45 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:33:41.727 /dev/nbd1' 00:33:41.727 13:17:45 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:41.727 13:17:45 -- bdev/nbd_common.sh@65 -- # count=2 00:33:41.727 13:17:45 -- bdev/nbd_common.sh@66 -- # echo 2 00:33:41.727 13:17:45 -- bdev/nbd_common.sh@95 -- # count=2 00:33:41.727 13:17:45 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:33:41.727 13:17:45 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:33:41.727 13:17:45 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:33:41.727 13:17:45 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:33:41.727 13:17:45 -- bdev/nbd_common.sh@71 -- # local operation=write 00:33:41.727 13:17:45 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:33:41.727 13:17:45 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:33:41.727 13:17:45 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:33:41.727 256+0 records in 00:33:41.727 256+0 records out 00:33:41.727 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00658257 s, 159 MB/s 00:33:41.727 13:17:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:33:41.727 13:17:45 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:33:41.727 256+0 records in 00:33:41.727 256+0 records out 00:33:41.727 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0805883 s, 13.0 MB/s 00:33:41.727 13:17:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:33:41.727 13:17:45 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:33:41.986 256+0 records in 00:33:41.987 256+0 records out 00:33:41.987 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0762946 s, 13.7 MB/s 00:33:41.987 13:17:45 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:33:41.987 13:17:45 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:33:41.987 13:17:45 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:33:41.987 13:17:45 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:33:41.987 13:17:45 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:33:41.987 13:17:45 -- bdev/nbd_common.sh@74 -- # '[' verify = write 
']' 00:33:41.987 13:17:45 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:33:41.987 13:17:45 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:33:41.987 13:17:45 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:33:41.987 13:17:45 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:33:41.987 13:17:45 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:33:41.987 13:17:45 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:33:41.987 13:17:45 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:33:41.987 13:17:45 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:41.987 13:17:45 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:33:41.987 13:17:45 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:41.987 13:17:45 -- bdev/nbd_common.sh@51 -- # local i 00:33:41.987 13:17:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:41.987 13:17:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:42.246 13:17:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:42.246 13:17:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:42.246 13:17:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:42.246 13:17:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:42.246 13:17:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:42.246 13:17:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:42.246 13:17:46 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:33:42.246 13:17:46 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:33:42.246 13:17:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:42.246 13:17:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:42.246 13:17:46 -- bdev/nbd_common.sh@41 -- # break 00:33:42.246 13:17:46 -- bdev/nbd_common.sh@45 -- # return 0 00:33:42.246 13:17:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:42.246 13:17:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:33:42.504 13:17:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:42.504 13:17:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:42.504 13:17:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:42.504 13:17:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:42.504 13:17:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:42.504 13:17:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:42.504 13:17:46 -- bdev/nbd_common.sh@41 -- # break 00:33:42.504 13:17:46 -- bdev/nbd_common.sh@45 -- # return 0 00:33:42.504 13:17:46 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:42.504 13:17:46 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:42.504 13:17:46 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:42.764 13:17:46 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:33:42.764 13:17:46 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:33:42.764 13:17:46 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:42.764 13:17:46 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:33:42.764 13:17:46 -- bdev/nbd_common.sh@65 -- # echo '' 00:33:42.764 13:17:46 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:42.764 
13:17:46 -- bdev/nbd_common.sh@65 -- # true 00:33:42.764 13:17:46 -- bdev/nbd_common.sh@65 -- # count=0 00:33:42.764 13:17:46 -- bdev/nbd_common.sh@66 -- # echo 0 00:33:42.764 13:17:46 -- bdev/nbd_common.sh@104 -- # count=0 00:33:42.764 13:17:46 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:33:42.764 13:17:46 -- bdev/nbd_common.sh@109 -- # return 0 00:33:42.764 13:17:46 -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:33:42.764 13:17:46 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:42.764 13:17:46 -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:33:42.764 13:17:46 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:33:42.764 13:17:46 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:33:42.764 13:17:46 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:33:43.023 malloc_lvol_verify 00:33:43.024 13:17:47 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:33:43.282 30db8a01-3d93-48bd-8b56-e2e7ee79c4ea 00:33:43.282 13:17:47 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:33:43.541 6b9c5c11-6812-428b-850a-eb642dcd7026 00:33:43.542 13:17:47 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:33:43.801 /dev/nbd0 00:33:43.801 13:17:47 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:33:43.801 mke2fs 1.45.5 (07-Jan-2020) 00:33:43.801 00:33:43.801 Filesystem too small for a journal 00:33:43.801 Creating filesystem with 1024 4k blocks and 1024 inodes 00:33:43.801 00:33:43.801 Allocating group tables: 0/1 done 00:33:43.801 Writing inode tables: 0/1 done 00:33:43.801 Writing superblocks and filesystem accounting information: 0/1 done 00:33:43.801 00:33:43.801 13:17:47 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:33:43.801 13:17:47 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:33:43.801 13:17:47 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:43.801 13:17:47 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:33:43.801 13:17:47 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:43.801 13:17:47 -- bdev/nbd_common.sh@51 -- # local i 00:33:43.801 13:17:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:43.801 13:17:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:44.059 13:17:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:44.059 13:17:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:44.059 13:17:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:44.059 13:17:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:44.059 13:17:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:44.059 13:17:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:44.059 13:17:48 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:33:44.318 13:17:48 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:33:44.318 13:17:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:44.318 13:17:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:44.318 13:17:48 -- bdev/nbd_common.sh@41 -- # break 00:33:44.318 13:17:48 -- bdev/nbd_common.sh@45 -- # return 0 00:33:44.318 13:17:48 -- 
bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:33:44.318 13:17:48 -- bdev/nbd_common.sh@147 -- # return 0 00:33:44.318 13:17:48 -- bdev/blockdev.sh@326 -- # killprocess 148037 00:33:44.318 13:17:48 -- common/autotest_common.sh@924 -- # '[' -z 148037 ']' 00:33:44.318 13:17:48 -- common/autotest_common.sh@928 -- # kill -0 148037 00:33:44.318 13:17:48 -- common/autotest_common.sh@929 -- # uname 00:33:44.318 13:17:48 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:33:44.318 13:17:48 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 148037 00:33:44.318 killing process with pid 148037 00:33:44.318 13:17:48 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:33:44.318 13:17:48 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:33:44.318 13:17:48 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 148037' 00:33:44.318 13:17:48 -- common/autotest_common.sh@943 -- # kill 148037 00:33:44.318 13:17:48 -- common/autotest_common.sh@948 -- # wait 148037 00:33:45.696 ************************************ 00:33:45.696 END TEST bdev_nbd 00:33:45.696 ************************************ 00:33:45.696 13:17:49 -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:33:45.696 00:33:45.696 real 0m7.456s 00:33:45.696 user 0m10.779s 00:33:45.696 sys 0m1.577s 00:33:45.696 13:17:49 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:33:45.696 13:17:49 -- common/autotest_common.sh@10 -- # set +x 00:33:45.696 13:17:49 -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:33:45.696 skipping fio tests on NVMe due to multi-ns failures. 00:33:45.696 13:17:49 -- bdev/blockdev.sh@764 -- # '[' gpt = nvme ']' 00:33:45.696 13:17:49 -- bdev/blockdev.sh@764 -- # '[' gpt = gpt ']' 00:33:45.696 13:17:49 -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:33:45.696 13:17:49 -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:45.696 13:17:49 -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:33:45.696 13:17:49 -- common/autotest_common.sh@1075 -- # '[' 16 -le 1 ']' 00:33:45.696 13:17:49 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:33:45.696 13:17:49 -- common/autotest_common.sh@10 -- # set +x 00:33:45.696 ************************************ 00:33:45.696 START TEST bdev_verify 00:33:45.696 ************************************ 00:33:45.696 13:17:49 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:33:45.696 [2024-04-17 13:17:49.573449] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:33:45.696 [2024-04-17 13:17:49.573853] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148330 ] 00:33:45.696 [2024-04-17 13:17:49.738109] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:45.955 [2024-04-17 13:17:49.955166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:45.955 [2024-04-17 13:17:49.955171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:45.955 [2024-04-17 13:17:50.005359] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:33:46.521 [2024-04-17 13:17:50.389587] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:33:46.521 Running I/O for 5 seconds... 00:33:51.789 00:33:51.789 Latency(us) 00:33:51.789 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:51.789 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:51.789 Verification LBA range: start 0x0 length 0x4ff80 00:33:51.789 Nvme0n1p1 : 5.02 5171.09 20.20 0.00 0.00 24677.45 3932.16 44326.17 00:33:51.789 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:33:51.789 Verification LBA range: start 0x4ff80 length 0x4ff80 00:33:51.789 Nvme0n1p1 : 5.03 5092.25 19.89 0.00 0.00 25047.54 3172.54 49807.36 00:33:51.789 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:51.789 Verification LBA range: start 0x0 length 0x4ff7f 00:33:51.789 Nvme0n1p2 : 5.03 5168.32 20.19 0.00 0.00 24651.23 3619.37 44802.79 00:33:51.789 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:33:51.789 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:33:51.789 Nvme0n1p2 : 5.03 5089.77 19.88 0.00 0.00 24989.73 4349.21 50283.99 00:33:51.789 =================================================================================================================== 00:33:51.790 Total : 20521.43 80.16 0.00 0.00 24840.17 3172.54 50283.99 00:33:52.725 ************************************ 00:33:52.725 END TEST bdev_verify 00:33:52.725 ************************************ 00:33:52.725 00:33:52.725 real 0m7.213s 00:33:52.725 user 0m13.238s 00:33:52.725 sys 0m0.246s 00:33:52.725 13:17:56 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:33:52.725 13:17:56 -- common/autotest_common.sh@10 -- # set +x 00:33:52.725 13:17:56 -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:33:52.725 13:17:56 -- common/autotest_common.sh@1075 -- # '[' 16 -le 1 ']' 00:33:52.725 13:17:56 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:33:52.725 13:17:56 -- common/autotest_common.sh@10 -- # set +x 00:33:52.725 ************************************ 00:33:52.725 START TEST bdev_verify_big_io 00:33:52.725 ************************************ 00:33:52.725 13:17:56 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:33:52.984 [2024-04-17 13:17:56.881937] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:33:52.984 [2024-04-17 13:17:56.882360] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148435 ] 00:33:52.984 [2024-04-17 13:17:57.054849] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:53.246 [2024-04-17 13:17:57.271222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:53.246 [2024-04-17 13:17:57.271233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:53.246 [2024-04-17 13:17:57.321531] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:33:53.813 [2024-04-17 13:17:57.707762] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:33:53.813 Running I/O for 5 seconds... 00:33:59.086 00:33:59.086 Latency(us) 00:33:59.086 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:59.086 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:33:59.086 Verification LBA range: start 0x0 length 0x4ff8 00:33:59.086 Nvme0n1p1 : 5.23 440.38 27.52 0.00 0.00 285230.95 5540.77 316479.30 00:33:59.086 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:33:59.086 Verification LBA range: start 0x4ff8 length 0x4ff8 00:33:59.086 Nvme0n1p1 : 5.22 416.47 26.03 0.00 0.00 291860.23 2800.17 341263.83 00:33:59.086 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:33:59.086 Verification LBA range: start 0x0 length 0x4ff7 00:33:59.086 Nvme0n1p2 : 5.23 439.60 27.48 0.00 0.00 277658.42 3142.75 320292.31 00:33:59.086 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:33:59.086 Verification LBA range: start 0x4ff7 length 0x4ff7 00:33:59.086 Nvme0n1p2 : 5.13 398.92 24.93 0.00 0.00 312691.43 5302.46 339357.32 00:33:59.086 =================================================================================================================== 00:33:59.086 Total : 1695.37 105.96 0.00 0.00 291261.07 2800.17 341263.83 00:34:00.124 ************************************ 00:34:00.124 END TEST bdev_verify_big_io 00:34:00.124 ************************************ 00:34:00.124 00:34:00.124 real 0m7.464s 00:34:00.124 user 0m13.692s 00:34:00.124 sys 0m0.250s 00:34:00.124 13:18:04 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:34:00.124 13:18:04 -- common/autotest_common.sh@10 -- # set +x 00:34:00.385 13:18:04 -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:00.385 13:18:04 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:34:00.385 13:18:04 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:34:00.385 13:18:04 -- common/autotest_common.sh@10 -- # set +x 00:34:00.385 ************************************ 00:34:00.385 START TEST bdev_write_zeroes 00:34:00.385 ************************************ 00:34:00.385 13:18:04 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:00.385 [2024-04-17 13:18:04.400048] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:34:00.385 [2024-04-17 13:18:04.400388] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148561 ] 00:34:00.672 [2024-04-17 13:18:04.564680] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:00.970 [2024-04-17 13:18:04.810004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:00.970 [2024-04-17 13:18:04.860019] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:34:01.228 [2024-04-17 13:18:05.237293] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:34:01.228 Running I/O for 1 seconds... 00:34:02.162 00:34:02.162 Latency(us) 00:34:02.162 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:02.162 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:34:02.162 Nvme0n1p1 : 1.00 26437.83 103.27 0.00 0.00 4829.09 2636.33 15013.70 00:34:02.162 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:34:02.162 Nvme0n1p2 : 1.01 26452.30 103.33 0.00 0.00 4824.06 2412.92 14775.39 00:34:02.162 =================================================================================================================== 00:34:02.162 Total : 52890.13 206.60 0.00 0.00 4826.58 2412.92 15013.70 00:34:03.540 ************************************ 00:34:03.540 END TEST bdev_write_zeroes 00:34:03.540 ************************************ 00:34:03.540 00:34:03.540 real 0m3.034s 00:34:03.540 user 0m2.702s 00:34:03.540 sys 0m0.232s 00:34:03.540 13:18:07 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:34:03.540 13:18:07 -- common/autotest_common.sh@10 -- # set +x 00:34:03.540 13:18:07 -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:03.540 13:18:07 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:34:03.540 13:18:07 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:34:03.540 13:18:07 -- common/autotest_common.sh@10 -- # set +x 00:34:03.540 ************************************ 00:34:03.540 START TEST bdev_json_nonenclosed 00:34:03.540 ************************************ 00:34:03.540 13:18:07 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:03.540 [2024-04-17 13:18:07.512240] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:34:03.540 [2024-04-17 13:18:07.512574] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148622 ] 00:34:03.540 [2024-04-17 13:18:07.674456] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:03.799 [2024-04-17 13:18:07.915259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:03.799 [2024-04-17 13:18:07.915520] json_config.c: 582:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:34:03.799 [2024-04-17 13:18:07.915667] rpc.c: 193:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:34:03.799 [2024-04-17 13:18:07.915722] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:34:04.367 00:34:04.367 real 0m0.853s 00:34:04.367 user 0m0.628s 00:34:04.367 sys 0m0.125s 00:34:04.367 13:18:08 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:34:04.367 13:18:08 -- common/autotest_common.sh@10 -- # set +x 00:34:04.367 ************************************ 00:34:04.367 END TEST bdev_json_nonenclosed 00:34:04.367 ************************************ 00:34:04.367 13:18:08 -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:04.367 13:18:08 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:34:04.367 13:18:08 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:34:04.367 13:18:08 -- common/autotest_common.sh@10 -- # set +x 00:34:04.367 ************************************ 00:34:04.367 START TEST bdev_json_nonarray 00:34:04.367 ************************************ 00:34:04.367 13:18:08 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:04.367 [2024-04-17 13:18:08.452361] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:34:04.367 [2024-04-17 13:18:08.452852] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148678 ] 00:34:04.625 [2024-04-17 13:18:08.621377] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:04.885 [2024-04-17 13:18:08.832407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:04.885 [2024-04-17 13:18:08.832801] json_config.c: 588:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:34:04.885 [2024-04-17 13:18:08.832991] rpc.c: 193:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:34:04.885 [2024-04-17 13:18:08.833106] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:34:05.145 ************************************ 00:34:05.145 END TEST bdev_json_nonarray 00:34:05.145 ************************************ 00:34:05.145 00:34:05.145 real 0m0.841s 00:34:05.145 user 0m0.592s 00:34:05.145 sys 0m0.148s 00:34:05.145 13:18:09 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:34:05.145 13:18:09 -- common/autotest_common.sh@10 -- # set +x 00:34:05.145 13:18:09 -- bdev/blockdev.sh@787 -- # [[ gpt == bdev ]] 00:34:05.145 13:18:09 -- bdev/blockdev.sh@794 -- # [[ gpt == gpt ]] 00:34:05.145 13:18:09 -- bdev/blockdev.sh@795 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:34:05.145 13:18:09 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:34:05.145 13:18:09 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:34:05.145 13:18:09 -- common/autotest_common.sh@10 -- # set +x 00:34:05.404 ************************************ 00:34:05.404 START TEST bdev_gpt_uuid 00:34:05.404 ************************************ 00:34:05.404 13:18:09 -- common/autotest_common.sh@1099 -- # bdev_gpt_uuid 00:34:05.404 13:18:09 -- bdev/blockdev.sh@614 -- # local bdev 00:34:05.404 13:18:09 -- bdev/blockdev.sh@616 -- # start_spdk_tgt 00:34:05.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:05.404 13:18:09 -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=148712 00:34:05.404 13:18:09 -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:34:05.404 13:18:09 -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:34:05.404 13:18:09 -- bdev/blockdev.sh@49 -- # waitforlisten 148712 00:34:05.404 13:18:09 -- common/autotest_common.sh@817 -- # '[' -z 148712 ']' 00:34:05.404 13:18:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:05.404 13:18:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:34:05.405 13:18:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:05.405 13:18:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:34:05.405 13:18:09 -- common/autotest_common.sh@10 -- # set +x 00:34:05.405 [2024-04-17 13:18:09.384158] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:34:05.405 [2024-04-17 13:18:09.384516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148712 ] 00:34:05.663 [2024-04-17 13:18:09.556255] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:05.663 [2024-04-17 13:18:09.770834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:06.601 13:18:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:34:06.601 13:18:10 -- common/autotest_common.sh@850 -- # return 0 00:34:06.601 13:18:10 -- bdev/blockdev.sh@618 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:34:06.601 13:18:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:06.601 13:18:10 -- common/autotest_common.sh@10 -- # set +x 00:34:06.601 Some configs were skipped because the RPC state that can call them passed over. 
00:34:06.601 13:18:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:06.601 13:18:10 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_wait_for_examine 00:34:06.601 13:18:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:06.601 13:18:10 -- common/autotest_common.sh@10 -- # set +x 00:34:06.601 13:18:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:06.601 13:18:10 -- bdev/blockdev.sh@621 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:34:06.601 13:18:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:06.601 13:18:10 -- common/autotest_common.sh@10 -- # set +x 00:34:06.601 13:18:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:06.601 13:18:10 -- bdev/blockdev.sh@621 -- # bdev='[ 00:34:06.601 { 00:34:06.601 "name": "Nvme0n1p1", 00:34:06.601 "aliases": [ 00:34:06.601 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:34:06.601 ], 00:34:06.601 "product_name": "GPT Disk", 00:34:06.601 "block_size": 4096, 00:34:06.601 "num_blocks": 655104, 00:34:06.601 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:34:06.601 "assigned_rate_limits": { 00:34:06.601 "rw_ios_per_sec": 0, 00:34:06.601 "rw_mbytes_per_sec": 0, 00:34:06.601 "r_mbytes_per_sec": 0, 00:34:06.601 "w_mbytes_per_sec": 0 00:34:06.601 }, 00:34:06.601 "claimed": false, 00:34:06.601 "zoned": false, 00:34:06.601 "supported_io_types": { 00:34:06.601 "read": true, 00:34:06.601 "write": true, 00:34:06.601 "unmap": true, 00:34:06.601 "write_zeroes": true, 00:34:06.601 "flush": true, 00:34:06.601 "reset": true, 00:34:06.601 "compare": true, 00:34:06.601 "compare_and_write": false, 00:34:06.601 "abort": true, 00:34:06.601 "nvme_admin": false, 00:34:06.601 "nvme_io": false 00:34:06.601 }, 00:34:06.601 "driver_specific": { 00:34:06.601 "gpt": { 00:34:06.601 "base_bdev": "Nvme0n1", 00:34:06.601 "offset_blocks": 256, 00:34:06.601 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:34:06.601 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:34:06.601 "partition_name": "SPDK_TEST_first" 00:34:06.601 } 00:34:06.601 } 00:34:06.601 } 00:34:06.601 ]' 00:34:06.601 13:18:10 -- bdev/blockdev.sh@622 -- # jq -r length 00:34:06.860 13:18:10 -- bdev/blockdev.sh@622 -- # [[ 1 == \1 ]] 00:34:06.860 13:18:10 -- bdev/blockdev.sh@623 -- # jq -r '.[0].aliases[0]' 00:34:06.860 13:18:10 -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:34:06.860 13:18:10 -- bdev/blockdev.sh@624 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:34:06.860 13:18:10 -- bdev/blockdev.sh@624 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:34:06.860 13:18:10 -- bdev/blockdev.sh@626 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:34:06.860 13:18:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:06.860 13:18:10 -- common/autotest_common.sh@10 -- # set +x 00:34:06.860 13:18:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:06.860 13:18:10 -- bdev/blockdev.sh@626 -- # bdev='[ 00:34:06.860 { 00:34:06.860 "name": "Nvme0n1p2", 00:34:06.860 "aliases": [ 00:34:06.860 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:34:06.860 ], 00:34:06.860 "product_name": "GPT Disk", 00:34:06.860 "block_size": 4096, 00:34:06.860 "num_blocks": 655103, 00:34:06.860 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:34:06.860 "assigned_rate_limits": { 00:34:06.860 "rw_ios_per_sec": 0, 00:34:06.860 
"rw_mbytes_per_sec": 0, 00:34:06.860 "r_mbytes_per_sec": 0, 00:34:06.860 "w_mbytes_per_sec": 0 00:34:06.860 }, 00:34:06.860 "claimed": false, 00:34:06.860 "zoned": false, 00:34:06.860 "supported_io_types": { 00:34:06.860 "read": true, 00:34:06.860 "write": true, 00:34:06.860 "unmap": true, 00:34:06.860 "write_zeroes": true, 00:34:06.860 "flush": true, 00:34:06.860 "reset": true, 00:34:06.860 "compare": true, 00:34:06.860 "compare_and_write": false, 00:34:06.860 "abort": true, 00:34:06.860 "nvme_admin": false, 00:34:06.860 "nvme_io": false 00:34:06.860 }, 00:34:06.860 "driver_specific": { 00:34:06.860 "gpt": { 00:34:06.860 "base_bdev": "Nvme0n1", 00:34:06.860 "offset_blocks": 655360, 00:34:06.860 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:34:06.860 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:34:06.860 "partition_name": "SPDK_TEST_second" 00:34:06.860 } 00:34:06.860 } 00:34:06.860 } 00:34:06.860 ]' 00:34:06.860 13:18:10 -- bdev/blockdev.sh@627 -- # jq -r length 00:34:06.860 13:18:10 -- bdev/blockdev.sh@627 -- # [[ 1 == \1 ]] 00:34:06.860 13:18:10 -- bdev/blockdev.sh@628 -- # jq -r '.[0].aliases[0]' 00:34:06.860 13:18:10 -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:34:06.860 13:18:10 -- bdev/blockdev.sh@629 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:34:07.119 13:18:11 -- bdev/blockdev.sh@629 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:34:07.119 13:18:11 -- bdev/blockdev.sh@631 -- # killprocess 148712 00:34:07.119 13:18:11 -- common/autotest_common.sh@924 -- # '[' -z 148712 ']' 00:34:07.119 13:18:11 -- common/autotest_common.sh@928 -- # kill -0 148712 00:34:07.119 13:18:11 -- common/autotest_common.sh@929 -- # uname 00:34:07.119 13:18:11 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:34:07.119 13:18:11 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 148712 00:34:07.119 13:18:11 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:34:07.119 13:18:11 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:34:07.119 13:18:11 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 148712' 00:34:07.119 killing process with pid 148712 00:34:07.119 13:18:11 -- common/autotest_common.sh@943 -- # kill 148712 00:34:07.119 13:18:11 -- common/autotest_common.sh@948 -- # wait 148712 00:34:09.667 ************************************ 00:34:09.667 END TEST bdev_gpt_uuid 00:34:09.667 ************************************ 00:34:09.667 00:34:09.667 real 0m3.912s 00:34:09.667 user 0m4.170s 00:34:09.667 sys 0m0.561s 00:34:09.667 13:18:13 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:34:09.667 13:18:13 -- common/autotest_common.sh@10 -- # set +x 00:34:09.667 13:18:13 -- bdev/blockdev.sh@798 -- # [[ gpt == crypto_sw ]] 00:34:09.667 13:18:13 -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:34:09.667 13:18:13 -- bdev/blockdev.sh@811 -- # cleanup 00:34:09.667 13:18:13 -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:34:09.667 13:18:13 -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:34:09.667 13:18:13 -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:34:09.667 13:18:13 -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:34:09.667 13:18:13 -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:34:09.667 13:18:13 -- 
bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:34:09.667 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:34:09.667 Waiting for block devices as requested 00:34:09.668 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:34:09.668 13:18:13 -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:34:09.668 13:18:13 -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:34:09.668 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:34:09.668 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:34:09.668 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:34:09.668 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:34:09.668 13:18:13 -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:34:09.668 ************************************ 00:34:09.668 END TEST blockdev_nvme_gpt 00:34:09.668 ************************************ 00:34:09.668 00:34:09.668 real 0m44.444s 00:34:09.668 user 1m2.190s 00:34:09.668 sys 0m6.148s 00:34:09.668 13:18:13 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:34:09.668 13:18:13 -- common/autotest_common.sh@10 -- # set +x 00:34:09.668 13:18:13 -- spdk/autotest.sh@211 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:34:09.668 13:18:13 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:34:09.668 13:18:13 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:34:09.668 13:18:13 -- common/autotest_common.sh@10 -- # set +x 00:34:09.926 ************************************ 00:34:09.926 START TEST nvme 00:34:09.926 ************************************ 00:34:09.926 13:18:13 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:34:09.926 * Looking for test storage... 00:34:09.926 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:34:09.926 13:18:13 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:34:10.185 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:34:10.442 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:34:11.378 13:18:15 -- nvme/nvme.sh@79 -- # uname 00:34:11.378 13:18:15 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:34:11.378 13:18:15 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:34:11.378 13:18:15 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:34:11.378 13:18:15 -- common/autotest_common.sh@1056 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:34:11.378 Waiting for stub to ready for secondary processes... 00:34:11.378 13:18:15 -- common/autotest_common.sh@1042 -- # _randomize_va_space=2 00:34:11.378 13:18:15 -- common/autotest_common.sh@1043 -- # echo 0 00:34:11.378 13:18:15 -- common/autotest_common.sh@1045 -- # stubpid=149144 00:34:11.378 13:18:15 -- common/autotest_common.sh@1044 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:34:11.378 13:18:15 -- common/autotest_common.sh@1046 -- # echo Waiting for stub to ready for secondary processes... 00:34:11.378 13:18:15 -- common/autotest_common.sh@1047 -- # '[' -e /var/run/spdk_stub0 ']' 00:34:11.378 13:18:15 -- common/autotest_common.sh@1049 -- # [[ -e /proc/149144 ]] 00:34:11.378 13:18:15 -- common/autotest_common.sh@1050 -- # sleep 1s 00:34:11.640 [2024-04-17 13:18:15.538912] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:34:11.640 [2024-04-17 13:18:15.539393] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:12.584 13:18:16 -- common/autotest_common.sh@1047 -- # '[' -e /var/run/spdk_stub0 ']' 00:34:12.584 13:18:16 -- common/autotest_common.sh@1049 -- # [[ -e /proc/149144 ]] 00:34:12.584 13:18:16 -- common/autotest_common.sh@1050 -- # sleep 1s 00:34:12.844 [2024-04-17 13:18:16.847837] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:13.102 [2024-04-17 13:18:17.073487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:13.102 [2024-04-17 13:18:17.073630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:13.102 [2024-04-17 13:18:17.073637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:13.102 [2024-04-17 13:18:17.074797] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:34:13.103 [2024-04-17 13:18:17.075033] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:34:13.103 [2024-04-17 13:18:17.084555] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:34:13.103 [2024-04-17 13:18:17.084803] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:34:13.103 [2024-04-17 13:18:17.092905] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:34:13.103 [2024-04-17 13:18:17.093311] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:34:13.362 13:18:17 -- common/autotest_common.sh@1047 -- # '[' -e /var/run/spdk_stub0 ']' 00:34:13.362 done. 00:34:13.362 13:18:17 -- common/autotest_common.sh@1052 -- # echo done. 
00:34:13.362 13:18:17 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:34:13.362 13:18:17 -- common/autotest_common.sh@1075 -- # '[' 10 -le 1 ']' 00:34:13.362 13:18:17 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:34:13.362 13:18:17 -- common/autotest_common.sh@10 -- # set +x 00:34:13.621 ************************************ 00:34:13.621 START TEST nvme_reset 00:34:13.621 ************************************ 00:34:13.621 13:18:17 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:34:13.880 Initializing NVMe Controllers 00:34:13.880 Skipping QEMU NVMe SSD at 0000:00:10.0 00:34:13.880 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:34:13.880 00:34:13.880 real 0m0.321s 00:34:13.880 user 0m0.124s 00:34:13.880 sys 0m0.128s 00:34:13.880 13:18:17 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:34:13.880 13:18:17 -- common/autotest_common.sh@10 -- # set +x 00:34:13.880 ************************************ 00:34:13.880 END TEST nvme_reset 00:34:13.880 ************************************ 00:34:13.880 13:18:17 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:34:13.880 13:18:17 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:34:13.880 13:18:17 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:34:13.880 13:18:17 -- common/autotest_common.sh@10 -- # set +x 00:34:13.880 ************************************ 00:34:13.880 START TEST nvme_identify 00:34:13.880 ************************************ 00:34:13.880 13:18:17 -- common/autotest_common.sh@1099 -- # nvme_identify 00:34:13.880 13:18:17 -- nvme/nvme.sh@12 -- # bdfs=() 00:34:13.880 13:18:17 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:34:13.880 13:18:17 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:34:13.880 13:18:17 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:34:13.880 13:18:17 -- common/autotest_common.sh@1487 -- # bdfs=() 00:34:13.880 13:18:17 -- common/autotest_common.sh@1487 -- # local bdfs 00:34:13.880 13:18:17 -- common/autotest_common.sh@1488 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:13.880 13:18:17 -- common/autotest_common.sh@1488 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:34:13.880 13:18:17 -- common/autotest_common.sh@1488 -- # jq -r '.config[].params.traddr' 00:34:13.880 13:18:17 -- common/autotest_common.sh@1489 -- # (( 1 == 0 )) 00:34:13.880 13:18:17 -- common/autotest_common.sh@1493 -- # printf '%s\n' 0000:00:10.0 00:34:13.880 13:18:18 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:34:14.141 [2024-04-17 13:18:18.260341] nvme_ctrlr.c:3484:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 149182 terminated unexpected 00:34:14.141 ===================================================== 00:34:14.141 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:34:14.141 ===================================================== 00:34:14.141 Controller Capabilities/Features 00:34:14.141 ================================ 00:34:14.141 Vendor ID: 1b36 00:34:14.141 Subsystem Vendor ID: 1af4 00:34:14.141 Serial Number: 12340 00:34:14.141 Model Number: QEMU NVMe Ctrl 00:34:14.141 Firmware Version: 8.0.0 00:34:14.141 Recommended Arb Burst: 6 00:34:14.141 IEEE OUI Identifier: 00 54 52 00:34:14.141 Multi-path I/O 00:34:14.141 May have multiple subsystem ports: No 00:34:14.141 May have multiple controllers: No 00:34:14.141 
Associated with SR-IOV VF: No 00:34:14.141 Max Data Transfer Size: 524288 00:34:14.141 Max Number of Namespaces: 256 00:34:14.141 Max Number of I/O Queues: 64 00:34:14.141 NVMe Specification Version (VS): 1.4 00:34:14.141 NVMe Specification Version (Identify): 1.4 00:34:14.141 Maximum Queue Entries: 2048 00:34:14.141 Contiguous Queues Required: Yes 00:34:14.141 Arbitration Mechanisms Supported 00:34:14.141 Weighted Round Robin: Not Supported 00:34:14.141 Vendor Specific: Not Supported 00:34:14.141 Reset Timeout: 7500 ms 00:34:14.141 Doorbell Stride: 4 bytes 00:34:14.141 NVM Subsystem Reset: Not Supported 00:34:14.141 Command Sets Supported 00:34:14.141 NVM Command Set: Supported 00:34:14.141 Boot Partition: Not Supported 00:34:14.141 Memory Page Size Minimum: 4096 bytes 00:34:14.141 Memory Page Size Maximum: 65536 bytes 00:34:14.141 Persistent Memory Region: Not Supported 00:34:14.141 Optional Asynchronous Events Supported 00:34:14.141 Namespace Attribute Notices: Supported 00:34:14.141 Firmware Activation Notices: Not Supported 00:34:14.141 ANA Change Notices: Not Supported 00:34:14.141 PLE Aggregate Log Change Notices: Not Supported 00:34:14.141 LBA Status Info Alert Notices: Not Supported 00:34:14.141 EGE Aggregate Log Change Notices: Not Supported 00:34:14.141 Normal NVM Subsystem Shutdown event: Not Supported 00:34:14.141 Zone Descriptor Change Notices: Not Supported 00:34:14.141 Discovery Log Change Notices: Not Supported 00:34:14.141 Controller Attributes 00:34:14.141 128-bit Host Identifier: Not Supported 00:34:14.141 Non-Operational Permissive Mode: Not Supported 00:34:14.141 NVM Sets: Not Supported 00:34:14.141 Read Recovery Levels: Not Supported 00:34:14.141 Endurance Groups: Not Supported 00:34:14.141 Predictable Latency Mode: Not Supported 00:34:14.141 Traffic Based Keep ALive: Not Supported 00:34:14.141 Namespace Granularity: Not Supported 00:34:14.141 SQ Associations: Not Supported 00:34:14.141 UUID List: Not Supported 00:34:14.141 Multi-Domain Subsystem: Not Supported 00:34:14.141 Fixed Capacity Management: Not Supported 00:34:14.141 Variable Capacity Management: Not Supported 00:34:14.141 Delete Endurance Group: Not Supported 00:34:14.141 Delete NVM Set: Not Supported 00:34:14.141 Extended LBA Formats Supported: Supported 00:34:14.141 Flexible Data Placement Supported: Not Supported 00:34:14.141 00:34:14.141 Controller Memory Buffer Support 00:34:14.141 ================================ 00:34:14.141 Supported: No 00:34:14.141 00:34:14.141 Persistent Memory Region Support 00:34:14.141 ================================ 00:34:14.141 Supported: No 00:34:14.141 00:34:14.141 Admin Command Set Attributes 00:34:14.141 ============================ 00:34:14.141 Security Send/Receive: Not Supported 00:34:14.141 Format NVM: Supported 00:34:14.141 Firmware Activate/Download: Not Supported 00:34:14.141 Namespace Management: Supported 00:34:14.141 Device Self-Test: Not Supported 00:34:14.141 Directives: Supported 00:34:14.141 NVMe-MI: Not Supported 00:34:14.141 Virtualization Management: Not Supported 00:34:14.141 Doorbell Buffer Config: Supported 00:34:14.141 Get LBA Status Capability: Not Supported 00:34:14.141 Command & Feature Lockdown Capability: Not Supported 00:34:14.141 Abort Command Limit: 4 00:34:14.141 Async Event Request Limit: 4 00:34:14.141 Number of Firmware Slots: N/A 00:34:14.141 Firmware Slot 1 Read-Only: N/A 00:34:14.141 Firmware Activation Without Reset: N/A 00:34:14.141 Multiple Update Detection Support: N/A 00:34:14.141 Firmware Update Granularity: No Information 
Provided 00:34:14.141 Per-Namespace SMART Log: Yes 00:34:14.141 Asymmetric Namespace Access Log Page: Not Supported 00:34:14.141 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:34:14.141 Command Effects Log Page: Supported 00:34:14.141 Get Log Page Extended Data: Supported 00:34:14.141 Telemetry Log Pages: Not Supported 00:34:14.141 Persistent Event Log Pages: Not Supported 00:34:14.141 Supported Log Pages Log Page: May Support 00:34:14.141 Commands Supported & Effects Log Page: Not Supported 00:34:14.141 Feature Identifiers & Effects Log Page:May Support 00:34:14.141 NVMe-MI Commands & Effects Log Page: May Support 00:34:14.141 Data Area 4 for Telemetry Log: Not Supported 00:34:14.141 Error Log Page Entries Supported: 1 00:34:14.141 Keep Alive: Not Supported 00:34:14.141 00:34:14.141 NVM Command Set Attributes 00:34:14.141 ========================== 00:34:14.141 Submission Queue Entry Size 00:34:14.141 Max: 64 00:34:14.141 Min: 64 00:34:14.141 Completion Queue Entry Size 00:34:14.141 Max: 16 00:34:14.141 Min: 16 00:34:14.141 Number of Namespaces: 256 00:34:14.141 Compare Command: Supported 00:34:14.141 Write Uncorrectable Command: Not Supported 00:34:14.141 Dataset Management Command: Supported 00:34:14.141 Write Zeroes Command: Supported 00:34:14.141 Set Features Save Field: Supported 00:34:14.141 Reservations: Not Supported 00:34:14.141 Timestamp: Supported 00:34:14.141 Copy: Supported 00:34:14.141 Volatile Write Cache: Present 00:34:14.141 Atomic Write Unit (Normal): 1 00:34:14.141 Atomic Write Unit (PFail): 1 00:34:14.141 Atomic Compare & Write Unit: 1 00:34:14.141 Fused Compare & Write: Not Supported 00:34:14.141 Scatter-Gather List 00:34:14.141 SGL Command Set: Supported 00:34:14.141 SGL Keyed: Not Supported 00:34:14.141 SGL Bit Bucket Descriptor: Not Supported 00:34:14.141 SGL Metadata Pointer: Not Supported 00:34:14.141 Oversized SGL: Not Supported 00:34:14.141 SGL Metadata Address: Not Supported 00:34:14.141 SGL Offset: Not Supported 00:34:14.141 Transport SGL Data Block: Not Supported 00:34:14.141 Replay Protected Memory Block: Not Supported 00:34:14.141 00:34:14.141 Firmware Slot Information 00:34:14.141 ========================= 00:34:14.141 Active slot: 1 00:34:14.141 Slot 1 Firmware Revision: 1.0 00:34:14.141 00:34:14.141 00:34:14.141 Commands Supported and Effects 00:34:14.141 ============================== 00:34:14.141 Admin Commands 00:34:14.141 -------------- 00:34:14.141 Delete I/O Submission Queue (00h): Supported 00:34:14.141 Create I/O Submission Queue (01h): Supported 00:34:14.141 Get Log Page (02h): Supported 00:34:14.141 Delete I/O Completion Queue (04h): Supported 00:34:14.141 Create I/O Completion Queue (05h): Supported 00:34:14.141 Identify (06h): Supported 00:34:14.141 Abort (08h): Supported 00:34:14.141 Set Features (09h): Supported 00:34:14.142 Get Features (0Ah): Supported 00:34:14.142 Asynchronous Event Request (0Ch): Supported 00:34:14.142 Namespace Attachment (15h): Supported NS-Inventory-Change 00:34:14.142 Directive Send (19h): Supported 00:34:14.142 Directive Receive (1Ah): Supported 00:34:14.142 Virtualization Management (1Ch): Supported 00:34:14.142 Doorbell Buffer Config (7Ch): Supported 00:34:14.142 Format NVM (80h): Supported LBA-Change 00:34:14.142 I/O Commands 00:34:14.142 ------------ 00:34:14.142 Flush (00h): Supported LBA-Change 00:34:14.142 Write (01h): Supported LBA-Change 00:34:14.142 Read (02h): Supported 00:34:14.142 Compare (05h): Supported 00:34:14.142 Write Zeroes (08h): Supported LBA-Change 00:34:14.142 Dataset Management (09h): 
Supported LBA-Change 00:34:14.142 Unknown (0Ch): Supported 00:34:14.142 Unknown (12h): Supported 00:34:14.142 Copy (19h): Supported LBA-Change 00:34:14.142 Unknown (1Dh): Supported LBA-Change 00:34:14.142 00:34:14.142 Error Log 00:34:14.142 ========= 00:34:14.142 00:34:14.142 Arbitration 00:34:14.142 =========== 00:34:14.142 Arbitration Burst: no limit 00:34:14.142 00:34:14.142 Power Management 00:34:14.142 ================ 00:34:14.142 Number of Power States: 1 00:34:14.142 Current Power State: Power State #0 00:34:14.142 Power State #0: 00:34:14.142 Max Power: 25.00 W 00:34:14.142 Non-Operational State: Operational 00:34:14.142 Entry Latency: 16 microseconds 00:34:14.142 Exit Latency: 4 microseconds 00:34:14.142 Relative Read Throughput: 0 00:34:14.142 Relative Read Latency: 0 00:34:14.142 Relative Write Throughput: 0 00:34:14.142 Relative Write Latency: 0 00:34:14.401 Idle Power: Not Reported 00:34:14.401 Active Power: Not Reported 00:34:14.401 Non-Operational Permissive Mode: Not Supported 00:34:14.401 00:34:14.401 Health Information 00:34:14.401 ================== 00:34:14.401 Critical Warnings: 00:34:14.401 Available Spare Space: OK 00:34:14.401 Temperature: OK 00:34:14.401 Device Reliability: OK 00:34:14.401 Read Only: No 00:34:14.401 Volatile Memory Backup: OK 00:34:14.401 Current Temperature: 323 Kelvin (50 Celsius) 00:34:14.401 Temperature Threshold: 343 Kelvin (70 Celsius) 00:34:14.401 Available Spare: 0% 00:34:14.401 Available Spare Threshold: 0% 00:34:14.401 Life Percentage Used: 0% 00:34:14.401 Data Units Read: 4340 00:34:14.401 Data Units Written: 3986 00:34:14.401 Host Read Commands: 221946 00:34:14.401 Host Write Commands: 234733 00:34:14.401 Controller Busy Time: 0 minutes 00:34:14.401 Power Cycles: 0 00:34:14.401 Power On Hours: 0 hours 00:34:14.401 Unsafe Shutdowns: 0 00:34:14.401 Unrecoverable Media Errors: 0 00:34:14.401 Lifetime Error Log Entries: 0 00:34:14.401 Warning Temperature Time: 0 minutes 00:34:14.401 Critical Temperature Time: 0 minutes 00:34:14.401 00:34:14.401 Number of Queues 00:34:14.401 ================ 00:34:14.401 Number of I/O Submission Queues: 64 00:34:14.401 Number of I/O Completion Queues: 64 00:34:14.401 00:34:14.401 ZNS Specific Controller Data 00:34:14.401 ============================ 00:34:14.401 Zone Append Size Limit: 0 00:34:14.401 00:34:14.401 00:34:14.401 Active Namespaces 00:34:14.401 ================= 00:34:14.401 Namespace ID:1 00:34:14.401 Error Recovery Timeout: Unlimited 00:34:14.401 Command Set Identifier: NVM (00h) 00:34:14.401 Deallocate: Supported 00:34:14.401 Deallocated/Unwritten Error: Supported 00:34:14.401 Deallocated Read Value: All 0x00 00:34:14.401 Deallocate in Write Zeroes: Not Supported 00:34:14.401 Deallocated Guard Field: 0xFFFF 00:34:14.401 Flush: Supported 00:34:14.401 Reservation: Not Supported 00:34:14.401 Namespace Sharing Capabilities: Private 00:34:14.401 Size (in LBAs): 1310720 (5GiB) 00:34:14.401 Capacity (in LBAs): 1310720 (5GiB) 00:34:14.401 Utilization (in LBAs): 1310720 (5GiB) 00:34:14.401 Thin Provisioning: Not Supported 00:34:14.401 Per-NS Atomic Units: No 00:34:14.401 Maximum Single Source Range Length: 128 00:34:14.401 Maximum Copy Length: 128 00:34:14.401 Maximum Source Range Count: 128 00:34:14.401 NGUID/EUI64 Never Reused: No 00:34:14.401 Namespace Write Protected: No 00:34:14.401 Number of LBA Formats: 8 00:34:14.401 Current LBA Format: LBA Format #04 00:34:14.401 LBA Format #00: Data Size: 512 Metadata Size: 0 00:34:14.401 LBA Format #01: Data Size: 512 Metadata Size: 8 00:34:14.401 LBA 
Format #02: Data Size: 512 Metadata Size: 16 00:34:14.401 LBA Format #03: Data Size: 512 Metadata Size: 64 00:34:14.401 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:34:14.401 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:34:14.401 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:34:14.401 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:34:14.401 00:34:14.401 13:18:18 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:34:14.401 13:18:18 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:34:14.662 ===================================================== 00:34:14.662 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:34:14.662 ===================================================== 00:34:14.662 Controller Capabilities/Features 00:34:14.662 ================================ 00:34:14.662 Vendor ID: 1b36 00:34:14.662 Subsystem Vendor ID: 1af4 00:34:14.662 Serial Number: 12340 00:34:14.662 Model Number: QEMU NVMe Ctrl 00:34:14.662 Firmware Version: 8.0.0 00:34:14.662 Recommended Arb Burst: 6 00:34:14.662 IEEE OUI Identifier: 00 54 52 00:34:14.662 Multi-path I/O 00:34:14.662 May have multiple subsystem ports: No 00:34:14.662 May have multiple controllers: No 00:34:14.662 Associated with SR-IOV VF: No 00:34:14.662 Max Data Transfer Size: 524288 00:34:14.662 Max Number of Namespaces: 256 00:34:14.662 Max Number of I/O Queues: 64 00:34:14.662 NVMe Specification Version (VS): 1.4 00:34:14.662 NVMe Specification Version (Identify): 1.4 00:34:14.662 Maximum Queue Entries: 2048 00:34:14.662 Contiguous Queues Required: Yes 00:34:14.662 Arbitration Mechanisms Supported 00:34:14.662 Weighted Round Robin: Not Supported 00:34:14.662 Vendor Specific: Not Supported 00:34:14.662 Reset Timeout: 7500 ms 00:34:14.662 Doorbell Stride: 4 bytes 00:34:14.662 NVM Subsystem Reset: Not Supported 00:34:14.662 Command Sets Supported 00:34:14.662 NVM Command Set: Supported 00:34:14.662 Boot Partition: Not Supported 00:34:14.662 Memory Page Size Minimum: 4096 bytes 00:34:14.662 Memory Page Size Maximum: 65536 bytes 00:34:14.662 Persistent Memory Region: Not Supported 00:34:14.662 Optional Asynchronous Events Supported 00:34:14.662 Namespace Attribute Notices: Supported 00:34:14.662 Firmware Activation Notices: Not Supported 00:34:14.662 ANA Change Notices: Not Supported 00:34:14.662 PLE Aggregate Log Change Notices: Not Supported 00:34:14.662 LBA Status Info Alert Notices: Not Supported 00:34:14.662 EGE Aggregate Log Change Notices: Not Supported 00:34:14.662 Normal NVM Subsystem Shutdown event: Not Supported 00:34:14.662 Zone Descriptor Change Notices: Not Supported 00:34:14.662 Discovery Log Change Notices: Not Supported 00:34:14.662 Controller Attributes 00:34:14.662 128-bit Host Identifier: Not Supported 00:34:14.662 Non-Operational Permissive Mode: Not Supported 00:34:14.662 NVM Sets: Not Supported 00:34:14.662 Read Recovery Levels: Not Supported 00:34:14.662 Endurance Groups: Not Supported 00:34:14.662 Predictable Latency Mode: Not Supported 00:34:14.662 Traffic Based Keep ALive: Not Supported 00:34:14.662 Namespace Granularity: Not Supported 00:34:14.662 SQ Associations: Not Supported 00:34:14.662 UUID List: Not Supported 00:34:14.662 Multi-Domain Subsystem: Not Supported 00:34:14.662 Fixed Capacity Management: Not Supported 00:34:14.662 Variable Capacity Management: Not Supported 00:34:14.662 Delete Endurance Group: Not Supported 00:34:14.662 Delete NVM Set: Not Supported 00:34:14.662 Extended LBA Formats Supported: Supported 
00:34:14.662 Flexible Data Placement Supported: Not Supported 00:34:14.662 00:34:14.662 Controller Memory Buffer Support 00:34:14.662 ================================ 00:34:14.662 Supported: No 00:34:14.662 00:34:14.662 Persistent Memory Region Support 00:34:14.662 ================================ 00:34:14.662 Supported: No 00:34:14.662 00:34:14.662 Admin Command Set Attributes 00:34:14.662 ============================ 00:34:14.662 Security Send/Receive: Not Supported 00:34:14.662 Format NVM: Supported 00:34:14.662 Firmware Activate/Download: Not Supported 00:34:14.662 Namespace Management: Supported 00:34:14.662 Device Self-Test: Not Supported 00:34:14.662 Directives: Supported 00:34:14.662 NVMe-MI: Not Supported 00:34:14.662 Virtualization Management: Not Supported 00:34:14.662 Doorbell Buffer Config: Supported 00:34:14.662 Get LBA Status Capability: Not Supported 00:34:14.662 Command & Feature Lockdown Capability: Not Supported 00:34:14.662 Abort Command Limit: 4 00:34:14.662 Async Event Request Limit: 4 00:34:14.662 Number of Firmware Slots: N/A 00:34:14.662 Firmware Slot 1 Read-Only: N/A 00:34:14.662 Firmware Activation Without Reset: N/A 00:34:14.662 Multiple Update Detection Support: N/A 00:34:14.662 Firmware Update Granularity: No Information Provided 00:34:14.662 Per-Namespace SMART Log: Yes 00:34:14.662 Asymmetric Namespace Access Log Page: Not Supported 00:34:14.662 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:34:14.662 Command Effects Log Page: Supported 00:34:14.662 Get Log Page Extended Data: Supported 00:34:14.662 Telemetry Log Pages: Not Supported 00:34:14.662 Persistent Event Log Pages: Not Supported 00:34:14.662 Supported Log Pages Log Page: May Support 00:34:14.662 Commands Supported & Effects Log Page: Not Supported 00:34:14.662 Feature Identifiers & Effects Log Page:May Support 00:34:14.662 NVMe-MI Commands & Effects Log Page: May Support 00:34:14.662 Data Area 4 for Telemetry Log: Not Supported 00:34:14.662 Error Log Page Entries Supported: 1 00:34:14.662 Keep Alive: Not Supported 00:34:14.662 00:34:14.662 NVM Command Set Attributes 00:34:14.662 ========================== 00:34:14.662 Submission Queue Entry Size 00:34:14.662 Max: 64 00:34:14.662 Min: 64 00:34:14.662 Completion Queue Entry Size 00:34:14.662 Max: 16 00:34:14.662 Min: 16 00:34:14.662 Number of Namespaces: 256 00:34:14.662 Compare Command: Supported 00:34:14.662 Write Uncorrectable Command: Not Supported 00:34:14.662 Dataset Management Command: Supported 00:34:14.662 Write Zeroes Command: Supported 00:34:14.662 Set Features Save Field: Supported 00:34:14.662 Reservations: Not Supported 00:34:14.662 Timestamp: Supported 00:34:14.662 Copy: Supported 00:34:14.662 Volatile Write Cache: Present 00:34:14.662 Atomic Write Unit (Normal): 1 00:34:14.662 Atomic Write Unit (PFail): 1 00:34:14.662 Atomic Compare & Write Unit: 1 00:34:14.662 Fused Compare & Write: Not Supported 00:34:14.662 Scatter-Gather List 00:34:14.662 SGL Command Set: Supported 00:34:14.662 SGL Keyed: Not Supported 00:34:14.662 SGL Bit Bucket Descriptor: Not Supported 00:34:14.662 SGL Metadata Pointer: Not Supported 00:34:14.662 Oversized SGL: Not Supported 00:34:14.662 SGL Metadata Address: Not Supported 00:34:14.662 SGL Offset: Not Supported 00:34:14.662 Transport SGL Data Block: Not Supported 00:34:14.662 Replay Protected Memory Block: Not Supported 00:34:14.662 00:34:14.662 Firmware Slot Information 00:34:14.662 ========================= 00:34:14.662 Active slot: 1 00:34:14.662 Slot 1 Firmware Revision: 1.0 00:34:14.662 00:34:14.662 
00:34:14.662 Commands Supported and Effects 00:34:14.662 ============================== 00:34:14.662 Admin Commands 00:34:14.662 -------------- 00:34:14.662 Delete I/O Submission Queue (00h): Supported 00:34:14.662 Create I/O Submission Queue (01h): Supported 00:34:14.662 Get Log Page (02h): Supported 00:34:14.662 Delete I/O Completion Queue (04h): Supported 00:34:14.662 Create I/O Completion Queue (05h): Supported 00:34:14.662 Identify (06h): Supported 00:34:14.662 Abort (08h): Supported 00:34:14.662 Set Features (09h): Supported 00:34:14.663 Get Features (0Ah): Supported 00:34:14.663 Asynchronous Event Request (0Ch): Supported 00:34:14.663 Namespace Attachment (15h): Supported NS-Inventory-Change 00:34:14.663 Directive Send (19h): Supported 00:34:14.663 Directive Receive (1Ah): Supported 00:34:14.663 Virtualization Management (1Ch): Supported 00:34:14.663 Doorbell Buffer Config (7Ch): Supported 00:34:14.663 Format NVM (80h): Supported LBA-Change 00:34:14.663 I/O Commands 00:34:14.663 ------------ 00:34:14.663 Flush (00h): Supported LBA-Change 00:34:14.663 Write (01h): Supported LBA-Change 00:34:14.663 Read (02h): Supported 00:34:14.663 Compare (05h): Supported 00:34:14.663 Write Zeroes (08h): Supported LBA-Change 00:34:14.663 Dataset Management (09h): Supported LBA-Change 00:34:14.663 Unknown (0Ch): Supported 00:34:14.663 Unknown (12h): Supported 00:34:14.663 Copy (19h): Supported LBA-Change 00:34:14.663 Unknown (1Dh): Supported LBA-Change 00:34:14.663 00:34:14.663 Error Log 00:34:14.663 ========= 00:34:14.663 00:34:14.663 Arbitration 00:34:14.663 =========== 00:34:14.663 Arbitration Burst: no limit 00:34:14.663 00:34:14.663 Power Management 00:34:14.663 ================ 00:34:14.663 Number of Power States: 1 00:34:14.663 Current Power State: Power State #0 00:34:14.663 Power State #0: 00:34:14.663 Max Power: 25.00 W 00:34:14.663 Non-Operational State: Operational 00:34:14.663 Entry Latency: 16 microseconds 00:34:14.663 Exit Latency: 4 microseconds 00:34:14.663 Relative Read Throughput: 0 00:34:14.663 Relative Read Latency: 0 00:34:14.663 Relative Write Throughput: 0 00:34:14.663 Relative Write Latency: 0 00:34:14.663 Idle Power: Not Reported 00:34:14.663 Active Power: Not Reported 00:34:14.663 Non-Operational Permissive Mode: Not Supported 00:34:14.663 00:34:14.663 Health Information 00:34:14.663 ================== 00:34:14.663 Critical Warnings: 00:34:14.663 Available Spare Space: OK 00:34:14.663 Temperature: OK 00:34:14.663 Device Reliability: OK 00:34:14.663 Read Only: No 00:34:14.663 Volatile Memory Backup: OK 00:34:14.663 Current Temperature: 323 Kelvin (50 Celsius) 00:34:14.663 Temperature Threshold: 343 Kelvin (70 Celsius) 00:34:14.663 Available Spare: 0% 00:34:14.663 Available Spare Threshold: 0% 00:34:14.663 Life Percentage Used: 0% 00:34:14.663 Data Units Read: 4340 00:34:14.663 Data Units Written: 3986 00:34:14.663 Host Read Commands: 221946 00:34:14.663 Host Write Commands: 234733 00:34:14.663 Controller Busy Time: 0 minutes 00:34:14.663 Power Cycles: 0 00:34:14.663 Power On Hours: 0 hours 00:34:14.663 Unsafe Shutdowns: 0 00:34:14.663 Unrecoverable Media Errors: 0 00:34:14.663 Lifetime Error Log Entries: 0 00:34:14.663 Warning Temperature Time: 0 minutes 00:34:14.663 Critical Temperature Time: 0 minutes 00:34:14.663 00:34:14.663 Number of Queues 00:34:14.663 ================ 00:34:14.663 Number of I/O Submission Queues: 64 00:34:14.663 Number of I/O Completion Queues: 64 00:34:14.663 00:34:14.663 ZNS Specific Controller Data 00:34:14.663 ============================ 
00:34:14.663 Zone Append Size Limit: 0 00:34:14.663 00:34:14.663 00:34:14.663 Active Namespaces 00:34:14.663 ================= 00:34:14.663 Namespace ID:1 00:34:14.663 Error Recovery Timeout: Unlimited 00:34:14.663 Command Set Identifier: NVM (00h) 00:34:14.663 Deallocate: Supported 00:34:14.663 Deallocated/Unwritten Error: Supported 00:34:14.663 Deallocated Read Value: All 0x00 00:34:14.663 Deallocate in Write Zeroes: Not Supported 00:34:14.663 Deallocated Guard Field: 0xFFFF 00:34:14.663 Flush: Supported 00:34:14.663 Reservation: Not Supported 00:34:14.663 Namespace Sharing Capabilities: Private 00:34:14.663 Size (in LBAs): 1310720 (5GiB) 00:34:14.663 Capacity (in LBAs): 1310720 (5GiB) 00:34:14.663 Utilization (in LBAs): 1310720 (5GiB) 00:34:14.663 Thin Provisioning: Not Supported 00:34:14.663 Per-NS Atomic Units: No 00:34:14.663 Maximum Single Source Range Length: 128 00:34:14.663 Maximum Copy Length: 128 00:34:14.663 Maximum Source Range Count: 128 00:34:14.663 NGUID/EUI64 Never Reused: No 00:34:14.663 Namespace Write Protected: No 00:34:14.663 Number of LBA Formats: 8 00:34:14.663 Current LBA Format: LBA Format #04 00:34:14.663 LBA Format #00: Data Size: 512 Metadata Size: 0 00:34:14.663 LBA Format #01: Data Size: 512 Metadata Size: 8 00:34:14.663 LBA Format #02: Data Size: 512 Metadata Size: 16 00:34:14.663 LBA Format #03: Data Size: 512 Metadata Size: 64 00:34:14.663 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:34:14.663 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:34:14.663 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:34:14.663 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:34:14.663 00:34:14.663 ************************************ 00:34:14.663 END TEST nvme_identify 00:34:14.663 ************************************ 00:34:14.663 00:34:14.663 real 0m0.711s 00:34:14.663 user 0m0.286s 00:34:14.663 sys 0m0.293s 00:34:14.663 13:18:18 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:34:14.663 13:18:18 -- common/autotest_common.sh@10 -- # set +x 00:34:14.663 13:18:18 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:34:14.663 13:18:18 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:34:14.663 13:18:18 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:34:14.663 13:18:18 -- common/autotest_common.sh@10 -- # set +x 00:34:14.663 ************************************ 00:34:14.663 START TEST nvme_perf 00:34:14.663 ************************************ 00:34:14.663 13:18:18 -- common/autotest_common.sh@1099 -- # nvme_perf 00:34:14.663 13:18:18 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:34:16.043 Initializing NVMe Controllers 00:34:16.043 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:34:16.043 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:34:16.043 Initialization complete. Launching workers. 
00:34:16.043 ======================================================== 00:34:16.043 Latency(us) 00:34:16.043 Device Information : IOPS MiB/s Average min max 00:34:16.043 PCIE (0000:00:10.0) NSID 1 from core 0: 84904.92 994.98 1505.89 649.45 6989.20 00:34:16.043 ======================================================== 00:34:16.043 Total : 84904.92 994.98 1505.89 649.45 6989.20 00:34:16.043 00:34:16.043 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:34:16.043 ================================================================================= 00:34:16.043 1.00000% : 804.305us 00:34:16.043 10.00000% : 975.593us 00:34:16.043 25.00000% : 1169.222us 00:34:16.043 50.00000% : 1459.665us 00:34:16.043 75.00000% : 1750.109us 00:34:16.043 90.00000% : 2070.342us 00:34:16.043 95.00000% : 2383.127us 00:34:16.043 98.00000% : 2651.229us 00:34:16.043 99.00000% : 2785.280us 00:34:16.043 99.50000% : 3232.116us 00:34:16.043 99.90000% : 4438.575us 00:34:16.043 99.99000% : 6702.545us 00:34:16.043 99.99900% : 7000.436us 00:34:16.043 99.99990% : 7000.436us 00:34:16.043 99.99999% : 7000.436us 00:34:16.043 00:34:16.043 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:34:16.043 ============================================================================== 00:34:16.043 Range in us Cumulative IO count 00:34:16.043 647.913 - 651.636: 0.0024% ( 2) 00:34:16.043 651.636 - 655.360: 0.0035% ( 1) 00:34:16.043 655.360 - 659.084: 0.0047% ( 1) 00:34:16.043 666.531 - 670.255: 0.0059% ( 1) 00:34:16.043 670.255 - 673.978: 0.0071% ( 1) 00:34:16.043 677.702 - 681.425: 0.0094% ( 2) 00:34:16.043 681.425 - 685.149: 0.0106% ( 1) 00:34:16.043 685.149 - 688.873: 0.0118% ( 1) 00:34:16.043 688.873 - 692.596: 0.0153% ( 3) 00:34:16.043 692.596 - 696.320: 0.0188% ( 3) 00:34:16.043 696.320 - 700.044: 0.0235% ( 4) 00:34:16.043 700.044 - 703.767: 0.0259% ( 2) 00:34:16.043 703.767 - 707.491: 0.0306% ( 4) 00:34:16.043 707.491 - 711.215: 0.0365% ( 5) 00:34:16.043 711.215 - 714.938: 0.0471% ( 9) 00:34:16.043 714.938 - 718.662: 0.0577% ( 9) 00:34:16.043 718.662 - 722.385: 0.0683% ( 9) 00:34:16.043 722.385 - 726.109: 0.0718% ( 3) 00:34:16.043 726.109 - 729.833: 0.0848% ( 11) 00:34:16.043 729.833 - 733.556: 0.1013% ( 14) 00:34:16.043 733.556 - 737.280: 0.1177% ( 14) 00:34:16.043 737.280 - 741.004: 0.1295% ( 10) 00:34:16.043 741.004 - 744.727: 0.1543% ( 21) 00:34:16.043 744.727 - 748.451: 0.1849% ( 26) 00:34:16.043 748.451 - 752.175: 0.2167% ( 27) 00:34:16.043 752.175 - 755.898: 0.2461% ( 25) 00:34:16.043 755.898 - 759.622: 0.2720% ( 22) 00:34:16.043 759.622 - 763.345: 0.3144% ( 36) 00:34:16.043 763.345 - 767.069: 0.3521% ( 32) 00:34:16.043 767.069 - 770.793: 0.4051% ( 45) 00:34:16.043 770.793 - 774.516: 0.4439% ( 33) 00:34:16.043 774.516 - 778.240: 0.5028% ( 50) 00:34:16.043 778.240 - 781.964: 0.5852% ( 70) 00:34:16.043 781.964 - 785.687: 0.6629% ( 66) 00:34:16.043 785.687 - 789.411: 0.7430% ( 68) 00:34:16.043 789.411 - 793.135: 0.8113% ( 58) 00:34:16.043 793.135 - 796.858: 0.9078% ( 82) 00:34:16.043 796.858 - 800.582: 0.9997% ( 78) 00:34:16.043 800.582 - 804.305: 1.0974% ( 83) 00:34:16.043 804.305 - 808.029: 1.1940% ( 82) 00:34:16.043 808.029 - 811.753: 1.3023% ( 92) 00:34:16.043 811.753 - 815.476: 1.4165% ( 97) 00:34:16.043 815.476 - 819.200: 1.5449% ( 109) 00:34:16.043 819.200 - 822.924: 1.6779% ( 113) 00:34:16.043 822.924 - 826.647: 1.7957% ( 100) 00:34:16.043 826.647 - 830.371: 1.9393% ( 122) 00:34:16.043 830.371 - 834.095: 2.0841% ( 123) 00:34:16.043 834.095 - 837.818: 2.2513% ( 142) 00:34:16.043 837.818 - 841.542: 
2.4021% ( 128) 00:34:16.043 841.542 - 845.265: 2.5540% ( 129) 00:34:16.043 845.265 - 848.989: 2.7212% ( 142) 00:34:16.043 848.989 - 852.713: 2.9013% ( 153) 00:34:16.043 852.713 - 856.436: 3.0897% ( 160) 00:34:16.043 856.436 - 860.160: 3.2852% ( 166) 00:34:16.043 860.160 - 863.884: 3.4524% ( 142) 00:34:16.043 863.884 - 867.607: 3.6502% ( 168) 00:34:16.043 867.607 - 871.331: 3.8786% ( 194) 00:34:16.043 871.331 - 875.055: 4.0812% ( 172) 00:34:16.044 875.055 - 878.778: 4.2778% ( 167) 00:34:16.044 878.778 - 882.502: 4.4874% ( 178) 00:34:16.044 882.502 - 886.225: 4.6970% ( 178) 00:34:16.044 886.225 - 889.949: 4.9019% ( 174) 00:34:16.044 889.949 - 893.673: 5.1291% ( 193) 00:34:16.044 893.673 - 897.396: 5.3305% ( 171) 00:34:16.044 897.396 - 901.120: 5.5518% ( 188) 00:34:16.044 901.120 - 904.844: 5.7685% ( 184) 00:34:16.044 904.844 - 908.567: 5.9969% ( 194) 00:34:16.044 908.567 - 912.291: 6.2112% ( 182) 00:34:16.044 912.291 - 916.015: 6.4585% ( 210) 00:34:16.044 916.015 - 919.738: 6.6787% ( 187) 00:34:16.044 919.738 - 923.462: 6.8847% ( 175) 00:34:16.044 923.462 - 927.185: 7.1273% ( 206) 00:34:16.044 927.185 - 930.909: 7.3640% ( 201) 00:34:16.044 930.909 - 934.633: 7.5900% ( 192) 00:34:16.044 934.633 - 938.356: 7.8361% ( 209) 00:34:16.044 938.356 - 942.080: 8.0905% ( 216) 00:34:16.044 942.080 - 945.804: 8.3130% ( 189) 00:34:16.044 945.804 - 949.527: 8.5344% ( 188) 00:34:16.044 949.527 - 953.251: 8.7510% ( 184) 00:34:16.044 953.251 - 960.698: 9.2079% ( 388) 00:34:16.044 960.698 - 968.145: 9.7036% ( 421) 00:34:16.044 968.145 - 975.593: 10.1628% ( 390) 00:34:16.044 975.593 - 983.040: 10.6491% ( 413) 00:34:16.044 983.040 - 990.487: 11.1719% ( 444) 00:34:16.044 990.487 - 997.935: 11.6594% ( 414) 00:34:16.044 997.935 - 1005.382: 12.1869% ( 448) 00:34:16.044 1005.382 - 1012.829: 12.6933% ( 430) 00:34:16.044 1012.829 - 1020.276: 13.2455% ( 469) 00:34:16.044 1020.276 - 1027.724: 13.7836% ( 457) 00:34:16.044 1027.724 - 1035.171: 14.3194% ( 455) 00:34:16.044 1035.171 - 1042.618: 14.9234% ( 513) 00:34:16.044 1042.618 - 1050.065: 15.4698% ( 464) 00:34:16.044 1050.065 - 1057.513: 16.0597% ( 501) 00:34:16.044 1057.513 - 1064.960: 16.6331% ( 487) 00:34:16.044 1064.960 - 1072.407: 17.2065% ( 487) 00:34:16.044 1072.407 - 1079.855: 17.7764% ( 484) 00:34:16.044 1079.855 - 1087.302: 18.3840% ( 516) 00:34:16.044 1087.302 - 1094.749: 18.9928% ( 517) 00:34:16.044 1094.749 - 1102.196: 19.6263% ( 538) 00:34:16.044 1102.196 - 1109.644: 20.2444% ( 525) 00:34:16.044 1109.644 - 1117.091: 20.8779% ( 538) 00:34:16.044 1117.091 - 1124.538: 21.4949% ( 524) 00:34:16.044 1124.538 - 1131.985: 22.1331% ( 542) 00:34:16.044 1131.985 - 1139.433: 22.7701% ( 541) 00:34:16.044 1139.433 - 1146.880: 23.3789% ( 517) 00:34:16.044 1146.880 - 1154.327: 24.0206% ( 545) 00:34:16.044 1154.327 - 1161.775: 24.6482% ( 533) 00:34:16.044 1161.775 - 1169.222: 25.2994% ( 553) 00:34:16.044 1169.222 - 1176.669: 25.9340% ( 539) 00:34:16.044 1176.669 - 1184.116: 26.5899% ( 557) 00:34:16.044 1184.116 - 1191.564: 27.2340% ( 547) 00:34:16.044 1191.564 - 1199.011: 27.8592% ( 531) 00:34:16.044 1199.011 - 1206.458: 28.4939% ( 539) 00:34:16.044 1206.458 - 1213.905: 29.1333% ( 543) 00:34:16.044 1213.905 - 1221.353: 29.7938% ( 561) 00:34:16.044 1221.353 - 1228.800: 30.4049% ( 519) 00:34:16.044 1228.800 - 1236.247: 31.0749% ( 569) 00:34:16.044 1236.247 - 1243.695: 31.7061% ( 536) 00:34:16.044 1243.695 - 1251.142: 32.3525% ( 549) 00:34:16.044 1251.142 - 1258.589: 32.9565% ( 513) 00:34:16.044 1258.589 - 1266.036: 33.6253% ( 568) 00:34:16.044 1266.036 - 1273.484: 34.2659% ( 
544) 00:34:16.044 1273.484 - 1280.931: 34.8900% ( 530) 00:34:16.044 1280.931 - 1288.378: 35.5529% ( 563) 00:34:16.044 1288.378 - 1295.825: 36.1993% ( 549) 00:34:16.044 1295.825 - 1303.273: 36.8399% ( 544) 00:34:16.044 1303.273 - 1310.720: 37.4710% ( 536) 00:34:16.044 1310.720 - 1318.167: 38.1222% ( 553) 00:34:16.044 1318.167 - 1325.615: 38.7686% ( 549) 00:34:16.044 1325.615 - 1333.062: 39.4421% ( 572) 00:34:16.044 1333.062 - 1340.509: 40.0532% ( 519) 00:34:16.044 1340.509 - 1347.956: 40.7456% ( 588) 00:34:16.044 1347.956 - 1355.404: 41.3461% ( 510) 00:34:16.044 1355.404 - 1362.851: 42.0067% ( 561) 00:34:16.044 1362.851 - 1370.298: 42.6531% ( 549) 00:34:16.044 1370.298 - 1377.745: 43.3042% ( 553) 00:34:16.044 1377.745 - 1385.193: 43.9366% ( 537) 00:34:16.044 1385.193 - 1392.640: 44.5865% ( 552) 00:34:16.044 1392.640 - 1400.087: 45.2259% ( 543) 00:34:16.044 1400.087 - 1407.535: 45.8712% ( 548) 00:34:16.044 1407.535 - 1414.982: 46.5211% ( 552) 00:34:16.044 1414.982 - 1422.429: 47.1676% ( 549) 00:34:16.044 1422.429 - 1429.876: 47.8187% ( 553) 00:34:16.044 1429.876 - 1437.324: 48.4628% ( 547) 00:34:16.044 1437.324 - 1444.771: 49.0963% ( 538) 00:34:16.044 1444.771 - 1452.218: 49.7698% ( 572) 00:34:16.044 1452.218 - 1459.665: 50.4115% ( 545) 00:34:16.044 1459.665 - 1467.113: 51.0544% ( 546) 00:34:16.044 1467.113 - 1474.560: 51.7009% ( 549) 00:34:16.044 1474.560 - 1482.007: 52.3179% ( 524) 00:34:16.044 1482.007 - 1489.455: 52.9761% ( 559) 00:34:16.044 1489.455 - 1496.902: 53.6131% ( 541) 00:34:16.044 1496.902 - 1504.349: 54.2666% ( 555) 00:34:16.044 1504.349 - 1511.796: 54.9248% ( 559) 00:34:16.044 1511.796 - 1519.244: 55.5548% ( 535) 00:34:16.044 1519.244 - 1526.691: 56.2248% ( 569) 00:34:16.044 1526.691 - 1534.138: 56.8524% ( 533) 00:34:16.044 1534.138 - 1541.585: 57.5329% ( 578) 00:34:16.044 1541.585 - 1549.033: 58.1276% ( 505) 00:34:16.044 1549.033 - 1556.480: 58.7940% ( 566) 00:34:16.044 1556.480 - 1563.927: 59.4181% ( 530) 00:34:16.044 1563.927 - 1571.375: 60.0575% ( 543) 00:34:16.044 1571.375 - 1578.822: 60.6886% ( 536) 00:34:16.044 1578.822 - 1586.269: 61.3209% ( 537) 00:34:16.044 1586.269 - 1593.716: 61.9532% ( 537) 00:34:16.044 1593.716 - 1601.164: 62.5961% ( 546) 00:34:16.044 1601.164 - 1608.611: 63.2225% ( 532) 00:34:16.044 1608.611 - 1616.058: 63.8690% ( 549) 00:34:16.044 1616.058 - 1623.505: 64.5095% ( 544) 00:34:16.044 1623.505 - 1630.953: 65.1383% ( 534) 00:34:16.044 1630.953 - 1638.400: 65.7812% ( 546) 00:34:16.044 1638.400 - 1645.847: 66.4006% ( 526) 00:34:16.044 1645.847 - 1653.295: 67.0517% ( 553) 00:34:16.044 1653.295 - 1660.742: 67.6558% ( 513) 00:34:16.044 1660.742 - 1668.189: 68.3128% ( 558) 00:34:16.044 1668.189 - 1675.636: 68.9286% ( 523) 00:34:16.044 1675.636 - 1683.084: 69.5739% ( 548) 00:34:16.044 1683.084 - 1690.531: 70.1991% ( 531) 00:34:16.044 1690.531 - 1697.978: 70.8244% ( 531) 00:34:16.044 1697.978 - 1705.425: 71.4484% ( 530) 00:34:16.044 1705.425 - 1712.873: 72.0784% ( 535) 00:34:16.044 1712.873 - 1720.320: 72.7201% ( 545) 00:34:16.044 1720.320 - 1727.767: 73.2935% ( 487) 00:34:16.044 1727.767 - 1735.215: 73.9576% ( 564) 00:34:16.044 1735.215 - 1742.662: 74.5099% ( 469) 00:34:16.044 1742.662 - 1750.109: 75.1398% ( 535) 00:34:16.044 1750.109 - 1757.556: 75.7203% ( 493) 00:34:16.044 1757.556 - 1765.004: 76.3032% ( 495) 00:34:16.044 1765.004 - 1772.451: 76.9013% ( 508) 00:34:16.044 1772.451 - 1779.898: 77.4606% ( 475) 00:34:16.044 1779.898 - 1787.345: 78.0376% ( 490) 00:34:16.044 1787.345 - 1794.793: 78.5533% ( 438) 00:34:16.044 1794.793 - 1802.240: 79.0632% ( 433) 
00:34:16.044 1802.240 - 1809.687: 79.5648% ( 426) 00:34:16.044 1809.687 - 1817.135: 80.0582% ( 419) 00:34:16.044 1817.135 - 1824.582: 80.5091% ( 383) 00:34:16.044 1824.582 - 1832.029: 80.9719% ( 393) 00:34:16.044 1832.029 - 1839.476: 81.4205% ( 381) 00:34:16.044 1839.476 - 1846.924: 81.8173% ( 337) 00:34:16.044 1846.924 - 1854.371: 82.2212% ( 343) 00:34:16.044 1854.371 - 1861.818: 82.6039% ( 325) 00:34:16.044 1861.818 - 1869.265: 82.9783% ( 318) 00:34:16.044 1869.265 - 1876.713: 83.3398% ( 307) 00:34:16.044 1876.713 - 1884.160: 83.6789% ( 288) 00:34:16.044 1884.160 - 1891.607: 84.0122% ( 283) 00:34:16.044 1891.607 - 1899.055: 84.3348% ( 274) 00:34:16.045 1899.055 - 1906.502: 84.6598% ( 276) 00:34:16.045 1906.502 - 1921.396: 85.2591% ( 509) 00:34:16.045 1921.396 - 1936.291: 85.8431% ( 496) 00:34:16.045 1936.291 - 1951.185: 86.3824% ( 458) 00:34:16.045 1951.185 - 1966.080: 86.9076% ( 446) 00:34:16.045 1966.080 - 1980.975: 87.4163% ( 432) 00:34:16.045 1980.975 - 1995.869: 87.9096% ( 419) 00:34:16.045 1995.869 - 2010.764: 88.3936% ( 411) 00:34:16.045 2010.764 - 2025.658: 88.8563% ( 393) 00:34:16.045 2025.658 - 2040.553: 89.2790% ( 359) 00:34:16.045 2040.553 - 2055.447: 89.6900% ( 349) 00:34:16.045 2055.447 - 2070.342: 90.0880% ( 338) 00:34:16.045 2070.342 - 2085.236: 90.4400% ( 299) 00:34:16.045 2085.236 - 2100.131: 90.7756% ( 285) 00:34:16.045 2100.131 - 2115.025: 91.1018% ( 277) 00:34:16.045 2115.025 - 2129.920: 91.4032% ( 256) 00:34:16.045 2129.920 - 2144.815: 91.6858% ( 240) 00:34:16.045 2144.815 - 2159.709: 91.9672% ( 239) 00:34:16.045 2159.709 - 2174.604: 92.2204% ( 215) 00:34:16.045 2174.604 - 2189.498: 92.4571% ( 201) 00:34:16.045 2189.498 - 2204.393: 92.6973% ( 204) 00:34:16.045 2204.393 - 2219.287: 92.9174% ( 187) 00:34:16.045 2219.287 - 2234.182: 93.1388% ( 188) 00:34:16.045 2234.182 - 2249.076: 93.3413% ( 172) 00:34:16.045 2249.076 - 2263.971: 93.5333% ( 163) 00:34:16.045 2263.971 - 2278.865: 93.7228% ( 161) 00:34:16.045 2278.865 - 2293.760: 93.9218% ( 169) 00:34:16.045 2293.760 - 2308.655: 94.1126% ( 162) 00:34:16.045 2308.655 - 2323.549: 94.2975% ( 157) 00:34:16.045 2323.549 - 2338.444: 94.4894% ( 163) 00:34:16.045 2338.444 - 2353.338: 94.6660% ( 150) 00:34:16.045 2353.338 - 2368.233: 94.8497% ( 156) 00:34:16.045 2368.233 - 2383.127: 95.0310% ( 154) 00:34:16.045 2383.127 - 2398.022: 95.2088% ( 151) 00:34:16.045 2398.022 - 2412.916: 95.3984% ( 161) 00:34:16.045 2412.916 - 2427.811: 95.5856% ( 159) 00:34:16.045 2427.811 - 2442.705: 95.7646% ( 152) 00:34:16.045 2442.705 - 2457.600: 95.9412% ( 150) 00:34:16.045 2457.600 - 2472.495: 96.1178% ( 150) 00:34:16.045 2472.495 - 2487.389: 96.3027% ( 157) 00:34:16.045 2487.389 - 2502.284: 96.4829% ( 153) 00:34:16.045 2502.284 - 2517.178: 96.6665% ( 156) 00:34:16.045 2517.178 - 2532.073: 96.8420% ( 149) 00:34:16.045 2532.073 - 2546.967: 97.0174% ( 149) 00:34:16.045 2546.967 - 2561.862: 97.1835% ( 141) 00:34:16.045 2561.862 - 2576.756: 97.3436% ( 136) 00:34:16.045 2576.756 - 2591.651: 97.5120% ( 143) 00:34:16.045 2591.651 - 2606.545: 97.6780% ( 141) 00:34:16.045 2606.545 - 2621.440: 97.8393% ( 137) 00:34:16.045 2621.440 - 2636.335: 97.9865% ( 125) 00:34:16.045 2636.335 - 2651.229: 98.1302% ( 122) 00:34:16.045 2651.229 - 2666.124: 98.2726% ( 121) 00:34:16.045 2666.124 - 2681.018: 98.3986% ( 107) 00:34:16.045 2681.018 - 2695.913: 98.5117% ( 96) 00:34:16.045 2695.913 - 2710.807: 98.6259% ( 97) 00:34:16.045 2710.807 - 2725.702: 98.7177% ( 78) 00:34:16.045 2725.702 - 2740.596: 98.8001% ( 70) 00:34:16.045 2740.596 - 2755.491: 98.8731% ( 62) 
00:34:16.045 2755.491 - 2770.385: 98.9403% ( 57) 00:34:16.045 2770.385 - 2785.280: 99.0003% ( 51) 00:34:16.045 2785.280 - 2800.175: 99.0474% ( 40) 00:34:16.045 2800.175 - 2815.069: 99.0886% ( 35) 00:34:16.045 2815.069 - 2829.964: 99.1192% ( 26) 00:34:16.045 2829.964 - 2844.858: 99.1557% ( 31) 00:34:16.045 2844.858 - 2859.753: 99.1852% ( 25) 00:34:16.045 2859.753 - 2874.647: 99.2170% ( 27) 00:34:16.045 2874.647 - 2889.542: 99.2335% ( 14) 00:34:16.045 2889.542 - 2904.436: 99.2547% ( 18) 00:34:16.045 2904.436 - 2919.331: 99.2711% ( 14) 00:34:16.045 2919.331 - 2934.225: 99.2876% ( 14) 00:34:16.045 2934.225 - 2949.120: 99.2994% ( 10) 00:34:16.045 2949.120 - 2964.015: 99.3112% ( 10) 00:34:16.045 2964.015 - 2978.909: 99.3229% ( 10) 00:34:16.045 2978.909 - 2993.804: 99.3324% ( 8) 00:34:16.045 2993.804 - 3008.698: 99.3477% ( 13) 00:34:16.045 3008.698 - 3023.593: 99.3559% ( 7) 00:34:16.045 3023.593 - 3038.487: 99.3653% ( 8) 00:34:16.045 3038.487 - 3053.382: 99.3771% ( 10) 00:34:16.045 3053.382 - 3068.276: 99.3842% ( 6) 00:34:16.045 3068.276 - 3083.171: 99.3936% ( 8) 00:34:16.045 3083.171 - 3098.065: 99.4018% ( 7) 00:34:16.045 3098.065 - 3112.960: 99.4160% ( 12) 00:34:16.045 3112.960 - 3127.855: 99.4301% ( 12) 00:34:16.045 3127.855 - 3142.749: 99.4431% ( 11) 00:34:16.045 3142.749 - 3157.644: 99.4525% ( 8) 00:34:16.045 3157.644 - 3172.538: 99.4654% ( 11) 00:34:16.045 3172.538 - 3187.433: 99.4760% ( 9) 00:34:16.045 3187.433 - 3202.327: 99.4878% ( 10) 00:34:16.045 3202.327 - 3217.222: 99.4996% ( 10) 00:34:16.045 3217.222 - 3232.116: 99.5113% ( 10) 00:34:16.045 3232.116 - 3247.011: 99.5231% ( 10) 00:34:16.045 3247.011 - 3261.905: 99.5325% ( 8) 00:34:16.045 3261.905 - 3276.800: 99.5443% ( 10) 00:34:16.045 3276.800 - 3291.695: 99.5584% ( 12) 00:34:16.045 3291.695 - 3306.589: 99.5667% ( 7) 00:34:16.045 3306.589 - 3321.484: 99.5785% ( 10) 00:34:16.045 3321.484 - 3336.378: 99.5867% ( 7) 00:34:16.045 3336.378 - 3351.273: 99.5973% ( 9) 00:34:16.045 3351.273 - 3366.167: 99.6044% ( 6) 00:34:16.045 3366.167 - 3381.062: 99.6138% ( 8) 00:34:16.045 3381.062 - 3395.956: 99.6220% ( 7) 00:34:16.045 3395.956 - 3410.851: 99.6291% ( 6) 00:34:16.045 3410.851 - 3425.745: 99.6350% ( 5) 00:34:16.045 3425.745 - 3440.640: 99.6432% ( 7) 00:34:16.045 3440.640 - 3455.535: 99.6479% ( 4) 00:34:16.045 3455.535 - 3470.429: 99.6550% ( 6) 00:34:16.045 3470.429 - 3485.324: 99.6632% ( 7) 00:34:16.045 3485.324 - 3500.218: 99.6715% ( 7) 00:34:16.045 3500.218 - 3515.113: 99.6785% ( 6) 00:34:16.045 3515.113 - 3530.007: 99.6844% ( 5) 00:34:16.045 3530.007 - 3544.902: 99.6903% ( 5) 00:34:16.045 3544.902 - 3559.796: 99.6962% ( 5) 00:34:16.045 3559.796 - 3574.691: 99.7021% ( 5) 00:34:16.045 3574.691 - 3589.585: 99.7056% ( 3) 00:34:16.045 3589.585 - 3604.480: 99.7115% ( 5) 00:34:16.045 3604.480 - 3619.375: 99.7186% ( 6) 00:34:16.045 3619.375 - 3634.269: 99.7233% ( 4) 00:34:16.045 3634.269 - 3649.164: 99.7304% ( 6) 00:34:16.045 3649.164 - 3664.058: 99.7339% ( 3) 00:34:16.045 3664.058 - 3678.953: 99.7410% ( 6) 00:34:16.045 3678.953 - 3693.847: 99.7457% ( 4) 00:34:16.045 3693.847 - 3708.742: 99.7516% ( 5) 00:34:16.045 3708.742 - 3723.636: 99.7563% ( 4) 00:34:16.045 3723.636 - 3738.531: 99.7621% ( 5) 00:34:16.045 3738.531 - 3753.425: 99.7657% ( 3) 00:34:16.045 3753.425 - 3768.320: 99.7716% ( 5) 00:34:16.045 3768.320 - 3783.215: 99.7763% ( 4) 00:34:16.045 3783.215 - 3798.109: 99.7822% ( 5) 00:34:16.045 3798.109 - 3813.004: 99.7881% ( 5) 00:34:16.045 3813.004 - 3842.793: 99.7975% ( 8) 00:34:16.045 3842.793 - 3872.582: 99.8069% ( 8) 00:34:16.045 3872.582 
- 3902.371: 99.8163% ( 8) 00:34:16.045 3902.371 - 3932.160: 99.8257% ( 8) 00:34:16.045 3932.160 - 3961.949: 99.8340% ( 7) 00:34:16.045 3961.949 - 3991.738: 99.8399% ( 5) 00:34:16.045 3991.738 - 4021.527: 99.8446% ( 4) 00:34:16.045 4021.527 - 4051.316: 99.8505% ( 5) 00:34:16.045 4051.316 - 4081.105: 99.8563% ( 5) 00:34:16.045 4081.105 - 4110.895: 99.8622% ( 5) 00:34:16.045 4110.895 - 4140.684: 99.8693% ( 6) 00:34:16.045 4140.684 - 4170.473: 99.8740% ( 4) 00:34:16.045 4170.473 - 4200.262: 99.8775% ( 3) 00:34:16.045 4200.262 - 4230.051: 99.8811% ( 3) 00:34:16.045 4230.051 - 4259.840: 99.8846% ( 3) 00:34:16.045 4259.840 - 4289.629: 99.8881% ( 3) 00:34:16.046 4289.629 - 4319.418: 99.8917% ( 3) 00:34:16.046 4319.418 - 4349.207: 99.8940% ( 2) 00:34:16.046 4349.207 - 4378.996: 99.8976% ( 3) 00:34:16.046 4378.996 - 4408.785: 99.8999% ( 2) 00:34:16.046 4408.785 - 4438.575: 99.9034% ( 3) 00:34:16.046 4438.575 - 4468.364: 99.9058% ( 2) 00:34:16.046 4468.364 - 4498.153: 99.9093% ( 3) 00:34:16.046 4498.153 - 4527.942: 99.9117% ( 2) 00:34:16.046 4527.942 - 4557.731: 99.9129% ( 1) 00:34:16.046 4557.731 - 4587.520: 99.9140% ( 1) 00:34:16.046 4587.520 - 4617.309: 99.9152% ( 1) 00:34:16.046 4617.309 - 4647.098: 99.9164% ( 1) 00:34:16.046 4676.887 - 4706.676: 99.9176% ( 1) 00:34:16.046 4706.676 - 4736.465: 99.9188% ( 1) 00:34:16.046 4736.465 - 4766.255: 99.9199% ( 1) 00:34:16.046 4766.255 - 4796.044: 99.9211% ( 1) 00:34:16.046 4796.044 - 4825.833: 99.9223% ( 1) 00:34:16.046 4825.833 - 4855.622: 99.9235% ( 1) 00:34:16.046 4855.622 - 4885.411: 99.9246% ( 1) 00:34:16.046 4915.200 - 4944.989: 99.9258% ( 1) 00:34:16.046 4944.989 - 4974.778: 99.9270% ( 1) 00:34:16.046 4974.778 - 5004.567: 99.9282% ( 1) 00:34:16.046 5004.567 - 5034.356: 99.9294% ( 1) 00:34:16.046 5034.356 - 5064.145: 99.9305% ( 1) 00:34:16.046 5064.145 - 5093.935: 99.9317% ( 1) 00:34:16.046 5093.935 - 5123.724: 99.9329% ( 1) 00:34:16.046 5123.724 - 5153.513: 99.9341% ( 1) 00:34:16.046 5153.513 - 5183.302: 99.9352% ( 1) 00:34:16.046 5183.302 - 5213.091: 99.9364% ( 1) 00:34:16.046 5213.091 - 5242.880: 99.9376% ( 1) 00:34:16.046 5242.880 - 5272.669: 99.9388% ( 1) 00:34:16.046 5302.458 - 5332.247: 99.9399% ( 1) 00:34:16.046 5332.247 - 5362.036: 99.9411% ( 1) 00:34:16.046 5362.036 - 5391.825: 99.9423% ( 1) 00:34:16.046 5391.825 - 5421.615: 99.9435% ( 1) 00:34:16.046 5421.615 - 5451.404: 99.9447% ( 1) 00:34:16.046 5451.404 - 5481.193: 99.9458% ( 1) 00:34:16.046 5510.982 - 5540.771: 99.9482% ( 2) 00:34:16.046 5570.560 - 5600.349: 99.9494% ( 1) 00:34:16.046 5600.349 - 5630.138: 99.9517% ( 2) 00:34:16.046 5659.927 - 5689.716: 99.9529% ( 1) 00:34:16.046 5689.716 - 5719.505: 99.9541% ( 1) 00:34:16.046 5719.505 - 5749.295: 99.9553% ( 1) 00:34:16.046 5749.295 - 5779.084: 99.9564% ( 1) 00:34:16.046 5779.084 - 5808.873: 99.9576% ( 1) 00:34:16.046 5808.873 - 5838.662: 99.9588% ( 1) 00:34:16.046 5838.662 - 5868.451: 99.9600% ( 1) 00:34:16.046 5868.451 - 5898.240: 99.9611% ( 1) 00:34:16.046 5898.240 - 5928.029: 99.9623% ( 1) 00:34:16.046 5928.029 - 5957.818: 99.9635% ( 1) 00:34:16.046 5957.818 - 5987.607: 99.9647% ( 1) 00:34:16.046 5987.607 - 6017.396: 99.9659% ( 1) 00:34:16.046 6017.396 - 6047.185: 99.9670% ( 1) 00:34:16.046 6047.185 - 6076.975: 99.9682% ( 1) 00:34:16.046 6076.975 - 6106.764: 99.9694% ( 1) 00:34:16.046 6136.553 - 6166.342: 99.9706% ( 1) 00:34:16.046 6166.342 - 6196.131: 99.9717% ( 1) 00:34:16.046 6196.131 - 6225.920: 99.9729% ( 1) 00:34:16.046 6225.920 - 6255.709: 99.9741% ( 1) 00:34:16.046 6255.709 - 6285.498: 99.9753% ( 1) 00:34:16.046 6285.498 - 
6315.287: 99.9765% ( 1) 00:34:16.046 6315.287 - 6345.076: 99.9776% ( 1) 00:34:16.046 6345.076 - 6374.865: 99.9788% ( 1) 00:34:16.046 6374.865 - 6404.655: 99.9800% ( 1) 00:34:16.046 6404.655 - 6434.444: 99.9812% ( 1) 00:34:16.046 6434.444 - 6464.233: 99.9823% ( 1) 00:34:16.046 6494.022 - 6523.811: 99.9835% ( 1) 00:34:16.046 6523.811 - 6553.600: 99.9847% ( 1) 00:34:16.046 6553.600 - 6583.389: 99.9859% ( 1) 00:34:16.046 6583.389 - 6613.178: 99.9870% ( 1) 00:34:16.046 6613.178 - 6642.967: 99.9882% ( 1) 00:34:16.046 6642.967 - 6672.756: 99.9894% ( 1) 00:34:16.046 6672.756 - 6702.545: 99.9906% ( 1) 00:34:16.046 6702.545 - 6732.335: 99.9918% ( 1) 00:34:16.046 6732.335 - 6762.124: 99.9929% ( 1) 00:34:16.046 6762.124 - 6791.913: 99.9941% ( 1) 00:34:16.046 6791.913 - 6821.702: 99.9953% ( 1) 00:34:16.046 6821.702 - 6851.491: 99.9965% ( 1) 00:34:16.046 6851.491 - 6881.280: 99.9976% ( 1) 00:34:16.046 6911.069 - 6940.858: 99.9988% ( 1) 00:34:16.046 6970.647 - 7000.436: 100.0000% ( 1) 00:34:16.046 00:34:16.046 13:18:20 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:34:17.456 Initializing NVMe Controllers 00:34:17.456 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:34:17.456 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:34:17.456 Initialization complete. Launching workers. 00:34:17.456 ======================================================== 00:34:17.456 Latency(us) 00:34:17.456 Device Information : IOPS MiB/s Average min max 00:34:17.456 PCIE (0000:00:10.0) NSID 1 from core 0: 67370.16 789.49 1899.03 630.19 10145.09 00:34:17.456 ======================================================== 00:34:17.456 Total : 67370.16 789.49 1899.03 630.19 10145.09 00:34:17.456 00:34:17.456 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:34:17.456 ================================================================================= 00:34:17.456 1.00000% : 1050.065us 00:34:17.456 10.00000% : 1288.378us 00:34:17.456 25.00000% : 1452.218us 00:34:17.456 50.00000% : 1683.084us 00:34:17.456 75.00000% : 2070.342us 00:34:17.456 90.00000% : 3038.487us 00:34:17.456 95.00000% : 3678.953us 00:34:17.456 98.00000% : 3842.793us 00:34:17.456 99.00000% : 4230.051us 00:34:17.456 99.50000% : 4468.364us 00:34:17.456 99.90000% : 5362.036us 00:34:17.456 99.99000% : 10009.135us 00:34:17.456 99.99900% : 10187.869us 00:34:17.456 99.99990% : 10187.869us 00:34:17.456 99.99999% : 10187.869us 00:34:17.456 00:34:17.456 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:34:17.456 ============================================================================== 00:34:17.456 Range in us Cumulative IO count 00:34:17.456 629.295 - 633.018: 0.0015% ( 1) 00:34:17.456 681.425 - 685.149: 0.0030% ( 1) 00:34:17.456 685.149 - 688.873: 0.0045% ( 1) 00:34:17.456 688.873 - 692.596: 0.0059% ( 1) 00:34:17.456 714.938 - 718.662: 0.0074% ( 1) 00:34:17.456 722.385 - 726.109: 0.0119% ( 3) 00:34:17.456 726.109 - 729.833: 0.0134% ( 1) 00:34:17.456 729.833 - 733.556: 0.0148% ( 1) 00:34:17.456 737.280 - 741.004: 0.0163% ( 1) 00:34:17.456 741.004 - 744.727: 0.0178% ( 1) 00:34:17.456 748.451 - 752.175: 0.0223% ( 3) 00:34:17.456 752.175 - 755.898: 0.0252% ( 2) 00:34:17.456 755.898 - 759.622: 0.0267% ( 1) 00:34:17.456 763.345 - 767.069: 0.0282% ( 1) 00:34:17.456 770.793 - 774.516: 0.0297% ( 1) 00:34:17.456 774.516 - 778.240: 0.0312% ( 1) 00:34:17.456 778.240 - 781.964: 0.0327% ( 1) 00:34:17.456 781.964 - 785.687: 0.0341% ( 1) 00:34:17.456 785.687 - 789.411: 0.0371% 
( 2) 00:34:17.456 789.411 - 793.135: 0.0386% ( 1) 00:34:17.456 796.858 - 800.582: 0.0401% ( 1) 00:34:17.456 808.029 - 811.753: 0.0416% ( 1) 00:34:17.456 811.753 - 815.476: 0.0430% ( 1) 00:34:17.456 815.476 - 819.200: 0.0460% ( 2) 00:34:17.456 826.647 - 830.371: 0.0475% ( 1) 00:34:17.456 834.095 - 837.818: 0.0564% ( 6) 00:34:17.456 837.818 - 841.542: 0.0594% ( 2) 00:34:17.456 841.542 - 845.265: 0.0608% ( 1) 00:34:17.456 845.265 - 848.989: 0.0638% ( 2) 00:34:17.456 848.989 - 852.713: 0.0712% ( 5) 00:34:17.456 852.713 - 856.436: 0.0742% ( 2) 00:34:17.456 860.160 - 863.884: 0.0816% ( 5) 00:34:17.456 863.884 - 867.607: 0.0876% ( 4) 00:34:17.456 871.331 - 875.055: 0.0935% ( 4) 00:34:17.456 875.055 - 878.778: 0.0950% ( 1) 00:34:17.456 878.778 - 882.502: 0.0994% ( 3) 00:34:17.456 882.502 - 886.225: 0.1039% ( 3) 00:34:17.456 886.225 - 889.949: 0.1098% ( 4) 00:34:17.456 889.949 - 893.673: 0.1172% ( 5) 00:34:17.456 893.673 - 897.396: 0.1262% ( 6) 00:34:17.456 897.396 - 901.120: 0.1336% ( 5) 00:34:17.456 901.120 - 904.844: 0.1365% ( 2) 00:34:17.456 904.844 - 908.567: 0.1410% ( 3) 00:34:17.456 908.567 - 912.291: 0.1529% ( 8) 00:34:17.456 912.291 - 916.015: 0.1618% ( 6) 00:34:17.456 916.015 - 919.738: 0.1707% ( 6) 00:34:17.456 919.738 - 923.462: 0.1722% ( 1) 00:34:17.456 923.462 - 927.185: 0.1825% ( 7) 00:34:17.456 927.185 - 930.909: 0.1900% ( 5) 00:34:17.456 930.909 - 934.633: 0.2033% ( 9) 00:34:17.456 934.633 - 938.356: 0.2122% ( 6) 00:34:17.456 938.356 - 942.080: 0.2196% ( 5) 00:34:17.456 942.080 - 945.804: 0.2286% ( 6) 00:34:17.456 945.804 - 949.527: 0.2449% ( 11) 00:34:17.456 949.527 - 953.251: 0.2627% ( 12) 00:34:17.456 953.251 - 960.698: 0.2805% ( 12) 00:34:17.456 960.698 - 968.145: 0.3102% ( 20) 00:34:17.456 968.145 - 975.593: 0.3369% ( 18) 00:34:17.456 975.593 - 983.040: 0.3859% ( 33) 00:34:17.456 983.040 - 990.487: 0.4185% ( 22) 00:34:17.456 990.487 - 997.935: 0.4675% ( 33) 00:34:17.456 997.935 - 1005.382: 0.5091% ( 28) 00:34:17.456 1005.382 - 1012.829: 0.5729% ( 43) 00:34:17.456 1012.829 - 1020.276: 0.6322% ( 40) 00:34:17.456 1020.276 - 1027.724: 0.7257% ( 63) 00:34:17.456 1027.724 - 1035.171: 0.7940% ( 46) 00:34:17.456 1035.171 - 1042.618: 0.9009% ( 72) 00:34:17.456 1042.618 - 1050.065: 1.0077% ( 72) 00:34:17.456 1050.065 - 1057.513: 1.1235% ( 78) 00:34:17.456 1057.513 - 1064.960: 1.2496% ( 85) 00:34:17.456 1064.960 - 1072.407: 1.3802% ( 88) 00:34:17.456 1072.407 - 1079.855: 1.5079% ( 86) 00:34:17.456 1079.855 - 1087.302: 1.6771% ( 114) 00:34:17.456 1087.302 - 1094.749: 1.8359% ( 107) 00:34:17.456 1094.749 - 1102.196: 2.0095% ( 117) 00:34:17.456 1102.196 - 1109.644: 2.1831% ( 117) 00:34:17.456 1109.644 - 1117.091: 2.3746% ( 129) 00:34:17.456 1117.091 - 1124.538: 2.5660% ( 129) 00:34:17.456 1124.538 - 1131.985: 2.7456% ( 121) 00:34:17.456 1131.985 - 1139.433: 2.9489% ( 137) 00:34:17.456 1139.433 - 1146.880: 3.1760% ( 153) 00:34:17.456 1146.880 - 1154.327: 3.3972% ( 149) 00:34:17.456 1154.327 - 1161.775: 3.6376% ( 162) 00:34:17.456 1161.775 - 1169.222: 3.9047% ( 180) 00:34:17.456 1169.222 - 1176.669: 4.1971% ( 197) 00:34:17.456 1176.669 - 1184.116: 4.5073% ( 209) 00:34:17.456 1184.116 - 1191.564: 4.8293% ( 217) 00:34:17.456 1191.564 - 1199.011: 5.1395% ( 209) 00:34:17.456 1199.011 - 1206.458: 5.4720% ( 224) 00:34:17.456 1206.458 - 1213.905: 5.8281% ( 240) 00:34:17.456 1213.905 - 1221.353: 6.2422% ( 279) 00:34:17.456 1221.353 - 1228.800: 6.6147% ( 251) 00:34:17.456 1228.800 - 1236.247: 7.0377% ( 285) 00:34:17.456 1236.247 - 1243.695: 7.4354% ( 268) 00:34:17.456 1243.695 - 1251.142: 7.8822% ( 
301) 00:34:17.456 1251.142 - 1258.589: 8.3081% ( 287) 00:34:17.456 1258.589 - 1266.036: 8.7979% ( 330) 00:34:17.456 1266.036 - 1273.484: 9.2565% ( 309) 00:34:17.456 1273.484 - 1280.931: 9.7447% ( 329) 00:34:17.456 1280.931 - 1288.378: 10.2939% ( 370) 00:34:17.456 1288.378 - 1295.825: 10.8638% ( 384) 00:34:17.456 1295.825 - 1303.273: 11.4544% ( 398) 00:34:17.456 1303.273 - 1310.720: 12.0496% ( 401) 00:34:17.456 1310.720 - 1318.167: 12.7085% ( 444) 00:34:17.456 1318.167 - 1325.615: 13.2784% ( 384) 00:34:17.456 1325.615 - 1333.062: 13.9018% ( 420) 00:34:17.456 1333.062 - 1340.509: 14.5444% ( 433) 00:34:17.456 1340.509 - 1347.956: 15.1662% ( 419) 00:34:17.456 1347.956 - 1355.404: 15.8267% ( 445) 00:34:17.456 1355.404 - 1362.851: 16.4826% ( 442) 00:34:17.456 1362.851 - 1370.298: 17.1772% ( 468) 00:34:17.456 1370.298 - 1377.745: 17.8569% ( 458) 00:34:17.456 1377.745 - 1385.193: 18.5233% ( 449) 00:34:17.456 1385.193 - 1392.640: 19.3010% ( 524) 00:34:17.456 1392.640 - 1400.087: 20.0401% ( 498) 00:34:17.456 1400.087 - 1407.535: 20.8831% ( 568) 00:34:17.456 1407.535 - 1414.982: 21.6014% ( 484) 00:34:17.456 1414.982 - 1422.429: 22.3924% ( 533) 00:34:17.456 1422.429 - 1429.876: 23.1315% ( 498) 00:34:17.456 1429.876 - 1437.324: 23.8899% ( 511) 00:34:17.456 1437.324 - 1444.771: 24.6750% ( 529) 00:34:17.456 1444.771 - 1452.218: 25.4764% ( 540) 00:34:17.456 1452.218 - 1459.665: 26.3209% ( 569) 00:34:17.456 1459.665 - 1467.113: 27.1416% ( 553) 00:34:17.456 1467.113 - 1474.560: 27.9400% ( 538) 00:34:17.456 1474.560 - 1482.007: 28.7444% ( 542) 00:34:17.456 1482.007 - 1489.455: 29.5444% ( 539) 00:34:17.456 1489.455 - 1496.902: 30.3606% ( 550) 00:34:17.456 1496.902 - 1504.349: 31.1576% ( 537) 00:34:17.456 1504.349 - 1511.796: 31.9768% ( 552) 00:34:17.456 1511.796 - 1519.244: 32.7709% ( 535) 00:34:17.456 1519.244 - 1526.691: 33.6836% ( 615) 00:34:17.456 1526.691 - 1534.138: 34.5696% ( 597) 00:34:17.456 1534.138 - 1541.585: 35.4066% ( 564) 00:34:17.456 1541.585 - 1549.033: 36.2823% ( 590) 00:34:17.456 1549.033 - 1556.480: 37.1876% ( 610) 00:34:17.456 1556.480 - 1563.927: 38.0751% ( 598) 00:34:17.456 1563.927 - 1571.375: 38.9730% ( 605) 00:34:17.456 1571.375 - 1578.822: 39.8590% ( 597) 00:34:17.456 1578.822 - 1586.269: 40.6352% ( 523) 00:34:17.456 1586.269 - 1593.716: 41.3698% ( 495) 00:34:17.456 1593.716 - 1601.164: 42.2544% ( 596) 00:34:17.456 1601.164 - 1608.611: 43.0469% ( 534) 00:34:17.456 1608.611 - 1616.058: 43.8513% ( 542) 00:34:17.456 1616.058 - 1623.505: 44.6572% ( 543) 00:34:17.456 1623.505 - 1630.953: 45.3666% ( 478) 00:34:17.456 1630.953 - 1638.400: 46.1190% ( 507) 00:34:17.456 1638.400 - 1645.847: 46.9561% ( 564) 00:34:17.456 1645.847 - 1653.295: 47.7204% ( 515) 00:34:17.456 1653.295 - 1660.742: 48.4090% ( 464) 00:34:17.456 1660.742 - 1668.189: 49.1763% ( 517) 00:34:17.456 1668.189 - 1675.636: 49.8976% ( 486) 00:34:17.457 1675.636 - 1683.084: 50.7168% ( 552) 00:34:17.457 1683.084 - 1690.531: 51.3877% ( 452) 00:34:17.457 1690.531 - 1697.978: 52.0303% ( 433) 00:34:17.457 1697.978 - 1705.425: 52.7011% ( 452) 00:34:17.457 1705.425 - 1712.873: 53.3215% ( 418) 00:34:17.457 1712.873 - 1720.320: 54.0071% ( 462) 00:34:17.457 1720.320 - 1727.767: 54.6052% ( 403) 00:34:17.457 1727.767 - 1735.215: 55.2360% ( 425) 00:34:17.457 1735.215 - 1742.662: 55.8504% ( 414) 00:34:17.457 1742.662 - 1750.109: 56.4559% ( 408) 00:34:17.457 1750.109 - 1757.556: 57.0303% ( 387) 00:34:17.457 1757.556 - 1765.004: 57.5987% ( 383) 00:34:17.457 1765.004 - 1772.451: 58.1345% ( 361) 00:34:17.457 1772.451 - 1779.898: 58.7207% ( 395) 
00:34:17.457 1779.898 - 1787.345: 59.2802% ( 377) 00:34:17.457 1787.345 - 1794.793: 59.9020% ( 419) 00:34:17.457 1794.793 - 1802.240: 60.4764% ( 387) 00:34:17.457 1802.240 - 1809.687: 60.9558% ( 323) 00:34:17.457 1809.687 - 1817.135: 61.4663% ( 344) 00:34:17.457 1817.135 - 1824.582: 61.9412% ( 320) 00:34:17.457 1824.582 - 1832.029: 62.4933% ( 372) 00:34:17.457 1832.029 - 1839.476: 63.0632% ( 384) 00:34:17.457 1839.476 - 1846.924: 63.5723% ( 343) 00:34:17.457 1846.924 - 1854.371: 64.1362% ( 380) 00:34:17.457 1854.371 - 1861.818: 64.6572% ( 351) 00:34:17.457 1861.818 - 1869.265: 65.1514% ( 333) 00:34:17.457 1869.265 - 1876.713: 65.6500% ( 336) 00:34:17.457 1876.713 - 1884.160: 66.0804% ( 290) 00:34:17.457 1884.160 - 1891.607: 66.5227% ( 298) 00:34:17.457 1891.607 - 1899.055: 66.9754% ( 305) 00:34:17.457 1899.055 - 1906.502: 67.4102% ( 293) 00:34:17.457 1906.502 - 1921.396: 68.2532% ( 568) 00:34:17.457 1921.396 - 1936.291: 69.0724% ( 552) 00:34:17.457 1936.291 - 1951.185: 69.8590% ( 530) 00:34:17.457 1951.185 - 1966.080: 70.5922% ( 494) 00:34:17.457 1966.080 - 1980.975: 71.3431% ( 506) 00:34:17.457 1980.975 - 1995.869: 72.0882% ( 502) 00:34:17.457 1995.869 - 2010.764: 72.7530% ( 448) 00:34:17.457 2010.764 - 2025.658: 73.4209% ( 450) 00:34:17.457 2025.658 - 2040.553: 74.0798% ( 444) 00:34:17.457 2040.553 - 2055.447: 74.7061% ( 422) 00:34:17.457 2055.447 - 2070.342: 75.3057% ( 404) 00:34:17.457 2070.342 - 2085.236: 75.8920% ( 395) 00:34:17.457 2085.236 - 2100.131: 76.4159% ( 353) 00:34:17.457 2100.131 - 2115.025: 77.0006% ( 394) 00:34:17.457 2115.025 - 2129.920: 77.5675% ( 382) 00:34:17.457 2129.920 - 2144.815: 78.0810% ( 346) 00:34:17.457 2144.815 - 2159.709: 78.5752% ( 333) 00:34:17.457 2159.709 - 2174.604: 79.0813% ( 341) 00:34:17.457 2174.604 - 2189.498: 79.5904% ( 343) 00:34:17.457 2189.498 - 2204.393: 80.0505% ( 310) 00:34:17.457 2204.393 - 2219.287: 80.4987% ( 302) 00:34:17.457 2219.287 - 2234.182: 80.9409% ( 298) 00:34:17.457 2234.182 - 2249.076: 81.3431% ( 271) 00:34:17.457 2249.076 - 2263.971: 81.7201% ( 254) 00:34:17.457 2263.971 - 2278.865: 82.0971% ( 254) 00:34:17.457 2278.865 - 2293.760: 82.4072% ( 209) 00:34:17.457 2293.760 - 2308.655: 82.7649% ( 241) 00:34:17.457 2308.655 - 2323.549: 83.0706% ( 206) 00:34:17.457 2323.549 - 2338.444: 83.3571% ( 193) 00:34:17.457 2338.444 - 2353.338: 83.6480% ( 196) 00:34:17.457 2353.338 - 2368.233: 83.9433% ( 199) 00:34:17.457 2368.233 - 2383.127: 84.2208% ( 187) 00:34:17.457 2383.127 - 2398.022: 84.4939% ( 184) 00:34:17.457 2398.022 - 2412.916: 84.7240% ( 155) 00:34:17.457 2412.916 - 2427.811: 84.9599% ( 159) 00:34:17.457 2427.811 - 2442.705: 85.1722% ( 143) 00:34:17.457 2442.705 - 2457.600: 85.3963% ( 151) 00:34:17.457 2457.600 - 2472.495: 85.5936% ( 133) 00:34:17.457 2472.495 - 2487.389: 85.7703% ( 119) 00:34:17.457 2487.389 - 2502.284: 85.9335% ( 110) 00:34:17.457 2502.284 - 2517.178: 86.1072% ( 117) 00:34:17.457 2517.178 - 2532.073: 86.2689% ( 109) 00:34:17.457 2532.073 - 2546.967: 86.4277% ( 107) 00:34:17.457 2546.967 - 2561.862: 86.5865% ( 107) 00:34:17.457 2561.862 - 2576.756: 86.7290% ( 96) 00:34:17.457 2576.756 - 2591.651: 86.8789% ( 101) 00:34:17.457 2591.651 - 2606.545: 87.0258% ( 99) 00:34:17.457 2606.545 - 2621.440: 87.1742% ( 100) 00:34:17.457 2621.440 - 2636.335: 87.3241% ( 101) 00:34:17.457 2636.335 - 2651.229: 87.4488% ( 84) 00:34:17.457 2651.229 - 2666.124: 87.5972% ( 100) 00:34:17.457 2666.124 - 2681.018: 87.7115% ( 77) 00:34:17.457 2681.018 - 2695.913: 87.8198% ( 73) 00:34:17.457 2695.913 - 2710.807: 87.9341% ( 77) 
00:34:17.457 2710.807 - 2725.702: 88.0410% ( 72) 00:34:17.457 2725.702 - 2740.596: 88.1567% ( 78) 00:34:17.457 2740.596 - 2755.491: 88.2740% ( 79) 00:34:17.457 2755.491 - 2770.385: 88.3897% ( 78) 00:34:17.457 2770.385 - 2785.280: 88.4892% ( 67) 00:34:17.457 2785.280 - 2800.175: 88.5990% ( 74) 00:34:17.457 2800.175 - 2815.069: 88.6925% ( 63) 00:34:17.457 2815.069 - 2829.964: 88.7845% ( 62) 00:34:17.457 2829.964 - 2844.858: 88.8914% ( 72) 00:34:17.457 2844.858 - 2859.753: 88.9849% ( 63) 00:34:17.457 2859.753 - 2874.647: 89.0724% ( 59) 00:34:17.457 2874.647 - 2889.542: 89.1570% ( 57) 00:34:17.457 2889.542 - 2904.436: 89.2535% ( 65) 00:34:17.457 2904.436 - 2919.331: 89.3485% ( 64) 00:34:17.457 2919.331 - 2934.225: 89.4316% ( 56) 00:34:17.457 2934.225 - 2949.120: 89.5132% ( 55) 00:34:17.457 2949.120 - 2964.015: 89.6112% ( 66) 00:34:17.457 2964.015 - 2978.909: 89.7032% ( 62) 00:34:17.457 2978.909 - 2993.804: 89.8026% ( 67) 00:34:17.457 2993.804 - 3008.698: 89.8931% ( 61) 00:34:17.457 3008.698 - 3023.593: 89.9911% ( 66) 00:34:17.457 3023.593 - 3038.487: 90.0831% ( 62) 00:34:17.457 3038.487 - 3053.382: 90.1751% ( 62) 00:34:17.457 3053.382 - 3068.276: 90.2805% ( 71) 00:34:17.457 3068.276 - 3083.171: 90.3621% ( 55) 00:34:17.457 3083.171 - 3098.065: 90.4734% ( 75) 00:34:17.457 3098.065 - 3112.960: 90.5684% ( 64) 00:34:17.457 3112.960 - 3127.855: 90.6545% ( 58) 00:34:17.457 3127.855 - 3142.749: 90.7495% ( 64) 00:34:17.457 3142.749 - 3157.644: 90.8549% ( 71) 00:34:17.457 3157.644 - 3172.538: 90.9587% ( 70) 00:34:17.457 3172.538 - 3187.433: 91.0389% ( 54) 00:34:17.457 3187.433 - 3202.327: 91.1175% ( 53) 00:34:17.457 3202.327 - 3217.222: 91.2007% ( 56) 00:34:17.457 3217.222 - 3232.116: 91.3105% ( 74) 00:34:17.457 3232.116 - 3247.011: 91.4025% ( 62) 00:34:17.457 3247.011 - 3261.905: 91.4901% ( 59) 00:34:17.457 3261.905 - 3276.800: 91.5776% ( 59) 00:34:17.457 3276.800 - 3291.695: 91.6771% ( 67) 00:34:17.457 3291.695 - 3306.589: 91.7631% ( 58) 00:34:17.457 3306.589 - 3321.484: 91.8359% ( 49) 00:34:17.457 3321.484 - 3336.378: 91.9145% ( 53) 00:34:17.457 3336.378 - 3351.273: 92.0050% ( 61) 00:34:17.457 3351.273 - 3366.167: 92.1015% ( 65) 00:34:17.457 3366.167 - 3381.062: 92.1787% ( 52) 00:34:17.457 3381.062 - 3395.956: 92.2663% ( 59) 00:34:17.457 3395.956 - 3410.851: 92.3568% ( 61) 00:34:17.457 3410.851 - 3425.745: 92.4399% ( 56) 00:34:17.457 3425.745 - 3440.640: 92.5200% ( 54) 00:34:17.457 3440.640 - 3455.535: 92.6017% ( 55) 00:34:17.457 3455.535 - 3470.429: 92.6818% ( 54) 00:34:17.457 3470.429 - 3485.324: 92.7530% ( 48) 00:34:17.457 3485.324 - 3500.218: 92.8287% ( 51) 00:34:17.457 3500.218 - 3515.113: 92.9015% ( 49) 00:34:17.457 3515.113 - 3530.007: 92.9846% ( 56) 00:34:17.457 3530.007 - 3544.902: 93.0543% ( 47) 00:34:17.457 3544.902 - 3559.796: 93.1226% ( 46) 00:34:17.457 3559.796 - 3574.691: 93.1909% ( 46) 00:34:17.457 3574.691 - 3589.585: 93.2814% ( 61) 00:34:17.457 3589.585 - 3604.480: 93.3853% ( 70) 00:34:17.457 3604.480 - 3619.375: 93.5159% ( 88) 00:34:17.457 3619.375 - 3634.269: 93.7355% ( 148) 00:34:17.457 3634.269 - 3649.164: 93.9626% ( 153) 00:34:17.457 3649.164 - 3664.058: 94.4880% ( 354) 00:34:17.457 3664.058 - 3678.953: 95.0430% ( 374) 00:34:17.457 3678.953 - 3693.847: 95.7406% ( 470) 00:34:17.457 3693.847 - 3708.742: 96.2927% ( 372) 00:34:17.457 3708.742 - 3723.636: 96.8715% ( 390) 00:34:17.457 3723.636 - 3738.531: 97.2588% ( 261) 00:34:17.457 3738.531 - 3753.425: 97.5438% ( 192) 00:34:17.457 3753.425 - 3768.320: 97.7323% ( 127) 00:34:17.457 3768.320 - 3783.215: 97.8139% ( 55) 00:34:17.457 
3783.215 - 3798.109: 97.9163% ( 69) 00:34:17.457 3798.109 - 3813.004: 97.9786% ( 42) 00:34:17.457 3813.004 - 3842.793: 98.0706% ( 62) 00:34:17.457 3842.793 - 3872.582: 98.1730% ( 69) 00:34:17.457 3872.582 - 3902.371: 98.2651% ( 62) 00:34:17.457 3902.371 - 3932.160: 98.3467% ( 55) 00:34:17.457 3932.160 - 3961.949: 98.4313% ( 57) 00:34:17.457 3961.949 - 3991.738: 98.5099% ( 53) 00:34:17.457 3991.738 - 4021.527: 98.5767% ( 45) 00:34:17.457 4021.527 - 4051.316: 98.6480% ( 48) 00:34:17.457 4051.316 - 4081.105: 98.7296% ( 55) 00:34:17.457 4081.105 - 4110.895: 98.8008% ( 48) 00:34:17.457 4110.895 - 4140.684: 98.8632% ( 42) 00:34:17.457 4140.684 - 4170.473: 98.9240% ( 41) 00:34:17.457 4170.473 - 4200.262: 98.9908% ( 45) 00:34:17.457 4200.262 - 4230.051: 99.0502% ( 40) 00:34:17.457 4230.051 - 4259.840: 99.1229% ( 49) 00:34:17.457 4259.840 - 4289.629: 99.1837% ( 41) 00:34:17.457 4289.629 - 4319.418: 99.2416% ( 39) 00:34:17.457 4319.418 - 4349.207: 99.3069% ( 44) 00:34:17.457 4349.207 - 4378.996: 99.3678% ( 41) 00:34:17.457 4378.996 - 4408.785: 99.4153% ( 32) 00:34:17.457 4408.785 - 4438.575: 99.4717% ( 38) 00:34:17.457 4438.575 - 4468.364: 99.5310% ( 40) 00:34:17.457 4468.364 - 4498.153: 99.5800% ( 33) 00:34:17.457 4498.153 - 4527.942: 99.6230% ( 29) 00:34:17.457 4527.942 - 4557.731: 99.6616% ( 26) 00:34:17.457 4557.731 - 4587.520: 99.6987% ( 25) 00:34:17.457 4587.520 - 4617.309: 99.7314% ( 22) 00:34:17.457 4617.309 - 4647.098: 99.7625% ( 21) 00:34:17.457 4647.098 - 4676.887: 99.7789% ( 11) 00:34:17.458 4676.887 - 4706.676: 99.7937% ( 10) 00:34:17.458 4706.676 - 4736.465: 99.8115% ( 12) 00:34:17.458 4736.465 - 4766.255: 99.8189% ( 5) 00:34:17.458 4766.255 - 4796.044: 99.8249% ( 4) 00:34:17.458 4796.044 - 4825.833: 99.8323% ( 5) 00:34:17.458 4825.833 - 4855.622: 99.8412% ( 6) 00:34:17.458 4855.622 - 4885.411: 99.8427% ( 1) 00:34:17.458 4885.411 - 4915.200: 99.8442% ( 1) 00:34:17.458 4915.200 - 4944.989: 99.8501% ( 4) 00:34:17.458 4944.989 - 4974.778: 99.8546% ( 3) 00:34:17.458 4974.778 - 5004.567: 99.8575% ( 2) 00:34:17.458 5004.567 - 5034.356: 99.8635% ( 4) 00:34:17.458 5034.356 - 5064.145: 99.8724% ( 6) 00:34:17.458 5064.145 - 5093.935: 99.8768% ( 3) 00:34:17.458 5093.935 - 5123.724: 99.8783% ( 1) 00:34:17.458 5123.724 - 5153.513: 99.8828% ( 3) 00:34:17.458 5153.513 - 5183.302: 99.8842% ( 1) 00:34:17.458 5183.302 - 5213.091: 99.8857% ( 1) 00:34:17.458 5213.091 - 5242.880: 99.8872% ( 1) 00:34:17.458 5242.880 - 5272.669: 99.8917% ( 3) 00:34:17.458 5272.669 - 5302.458: 99.8961% ( 3) 00:34:17.458 5302.458 - 5332.247: 99.8991% ( 2) 00:34:17.458 5332.247 - 5362.036: 99.9006% ( 1) 00:34:17.458 5362.036 - 5391.825: 99.9020% ( 1) 00:34:17.458 5391.825 - 5421.615: 99.9050% ( 2) 00:34:17.458 5421.615 - 5451.404: 99.9065% ( 1) 00:34:17.458 5451.404 - 5481.193: 99.9080% ( 1) 00:34:17.458 5481.193 - 5510.982: 99.9095% ( 1) 00:34:17.458 5510.982 - 5540.771: 99.9110% ( 1) 00:34:17.458 5540.771 - 5570.560: 99.9139% ( 2) 00:34:17.458 5570.560 - 5600.349: 99.9154% ( 1) 00:34:17.458 5600.349 - 5630.138: 99.9169% ( 1) 00:34:17.458 5630.138 - 5659.927: 99.9184% ( 1) 00:34:17.458 5659.927 - 5689.716: 99.9213% ( 2) 00:34:17.458 5689.716 - 5719.505: 99.9258% ( 3) 00:34:17.458 5719.505 - 5749.295: 99.9273% ( 1) 00:34:17.458 5749.295 - 5779.084: 99.9288% ( 1) 00:34:17.458 5779.084 - 5808.873: 99.9302% ( 1) 00:34:17.458 5808.873 - 5838.662: 99.9332% ( 2) 00:34:17.458 5868.451 - 5898.240: 99.9347% ( 1) 00:34:17.458 5898.240 - 5928.029: 99.9377% ( 2) 00:34:17.458 5928.029 - 5957.818: 99.9392% ( 1) 00:34:17.458 5957.818 - 
5987.607: 99.9406% ( 1) 00:34:17.458 5987.607 - 6017.396: 99.9421% ( 1) 00:34:17.458 6017.396 - 6047.185: 99.9436% ( 1) 00:34:17.458 6047.185 - 6076.975: 99.9466% ( 2) 00:34:17.458 6076.975 - 6106.764: 99.9481% ( 1) 00:34:17.458 6106.764 - 6136.553: 99.9495% ( 1) 00:34:17.458 6136.553 - 6166.342: 99.9510% ( 1) 00:34:17.458 6166.342 - 6196.131: 99.9525% ( 1) 00:34:17.458 6225.920 - 6255.709: 99.9555% ( 2) 00:34:17.458 6464.233 - 6494.022: 99.9584% ( 2) 00:34:17.458 6494.022 - 6523.811: 99.9599% ( 1) 00:34:17.458 6911.069 - 6940.858: 99.9763% ( 11) 00:34:17.458 7626.007 - 7685.585: 99.9792% ( 2) 00:34:17.458 8519.680 - 8579.258: 99.9837% ( 3) 00:34:17.458 8817.571 - 8877.149: 99.9852% ( 1) 00:34:17.458 9651.665 - 9711.244: 99.9881% ( 2) 00:34:17.458 9889.978 - 9949.556: 99.9896% ( 1) 00:34:17.458 9949.556 - 10009.135: 99.9941% ( 3) 00:34:17.458 10009.135 - 10068.713: 99.9955% ( 1) 00:34:17.458 10068.713 - 10128.291: 99.9970% ( 1) 00:34:17.458 10128.291 - 10187.869: 100.0000% ( 2) 00:34:17.458 00:34:17.458 ************************************ 00:34:17.458 END TEST nvme_perf 00:34:17.458 ************************************ 00:34:17.458 13:18:21 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:34:17.458 00:34:17.458 real 0m2.680s 00:34:17.458 user 0m2.248s 00:34:17.458 sys 0m0.279s 00:34:17.458 13:18:21 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:34:17.458 13:18:21 -- common/autotest_common.sh@10 -- # set +x 00:34:17.458 13:18:21 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:34:17.458 13:18:21 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:34:17.458 13:18:21 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:34:17.458 13:18:21 -- common/autotest_common.sh@10 -- # set +x 00:34:17.458 ************************************ 00:34:17.458 START TEST nvme_hello_world 00:34:17.458 ************************************ 00:34:17.458 13:18:21 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:34:17.717 Initializing NVMe Controllers 00:34:17.717 Attached to 0000:00:10.0 00:34:17.717 Namespace ID: 1 size: 5GB 00:34:17.717 Initialization complete. 00:34:17.717 INFO: using host memory buffer for IO 00:34:17.717 Hello world! 
00:34:17.717 00:34:17.717 real 0m0.320s 00:34:17.717 user 0m0.114s 00:34:17.717 ************************************ 00:34:17.717 END TEST nvme_hello_world 00:34:17.717 ************************************ 00:34:17.717 sys 0m0.131s 00:34:17.717 13:18:21 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:34:17.717 13:18:21 -- common/autotest_common.sh@10 -- # set +x 00:34:17.717 13:18:21 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:34:17.717 13:18:21 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:34:17.717 13:18:21 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:34:17.717 13:18:21 -- common/autotest_common.sh@10 -- # set +x 00:34:17.976 ************************************ 00:34:17.976 START TEST nvme_sgl 00:34:17.976 ************************************ 00:34:17.976 13:18:21 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:34:18.235 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:34:18.235 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:34:18.235 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:34:18.235 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:34:18.235 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:34:18.235 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:34:18.235 NVMe Readv/Writev Request test 00:34:18.235 Attached to 0000:00:10.0 00:34:18.235 0000:00:10.0: build_io_request_2 test passed 00:34:18.235 0000:00:10.0: build_io_request_4 test passed 00:34:18.235 0000:00:10.0: build_io_request_5 test passed 00:34:18.235 0000:00:10.0: build_io_request_6 test passed 00:34:18.235 0000:00:10.0: build_io_request_7 test passed 00:34:18.235 0000:00:10.0: build_io_request_10 test passed 00:34:18.235 Cleaning up... 00:34:18.235 ************************************ 00:34:18.235 END TEST nvme_sgl 00:34:18.235 ************************************ 00:34:18.235 00:34:18.235 real 0m0.378s 00:34:18.235 user 0m0.153s 00:34:18.235 sys 0m0.140s 00:34:18.235 13:18:22 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:34:18.235 13:18:22 -- common/autotest_common.sh@10 -- # set +x 00:34:18.235 13:18:22 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:34:18.235 13:18:22 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:34:18.235 13:18:22 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:34:18.235 13:18:22 -- common/autotest_common.sh@10 -- # set +x 00:34:18.235 ************************************ 00:34:18.235 START TEST nvme_e2edp 00:34:18.235 ************************************ 00:34:18.235 13:18:22 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:34:18.494 NVMe Write/Read with End-to-End data protection test 00:34:18.494 Attached to 0000:00:10.0 00:34:18.494 Cleaning up... 
00:34:18.753 ************************************ 00:34:18.753 END TEST nvme_e2edp 00:34:18.753 ************************************ 00:34:18.753 00:34:18.753 real 0m0.307s 00:34:18.753 user 0m0.097s 00:34:18.753 sys 0m0.139s 00:34:18.753 13:18:22 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:34:18.753 13:18:22 -- common/autotest_common.sh@10 -- # set +x 00:34:18.753 13:18:22 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:34:18.753 13:18:22 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:34:18.753 13:18:22 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:34:18.753 13:18:22 -- common/autotest_common.sh@10 -- # set +x 00:34:18.753 ************************************ 00:34:18.753 START TEST nvme_reserve 00:34:18.753 ************************************ 00:34:18.753 13:18:22 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:34:19.012 ===================================================== 00:34:19.012 NVMe Controller at PCI bus 0, device 16, function 0 00:34:19.012 ===================================================== 00:34:19.012 Reservations: Not Supported 00:34:19.012 Reservation test passed 00:34:19.012 ************************************ 00:34:19.012 END TEST nvme_reserve 00:34:19.012 ************************************ 00:34:19.012 00:34:19.012 real 0m0.324s 00:34:19.012 user 0m0.126s 00:34:19.012 sys 0m0.112s 00:34:19.012 13:18:23 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:34:19.012 13:18:23 -- common/autotest_common.sh@10 -- # set +x 00:34:19.012 13:18:23 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:34:19.012 13:18:23 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:34:19.012 13:18:23 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:34:19.012 13:18:23 -- common/autotest_common.sh@10 -- # set +x 00:34:19.012 ************************************ 00:34:19.012 START TEST nvme_err_injection 00:34:19.012 ************************************ 00:34:19.012 13:18:23 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:34:19.579 NVMe Error Injection test 00:34:19.579 Attached to 0000:00:10.0 00:34:19.579 0000:00:10.0: get features failed as expected 00:34:19.579 0000:00:10.0: get features successfully as expected 00:34:19.579 0000:00:10.0: read failed as expected 00:34:19.579 0000:00:10.0: read successfully as expected 00:34:19.579 Cleaning up... 
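The err_injection output above shows the intended pairing: each injected admin command fails once ("failed as expected") and then succeeds once the single-shot injection is consumed ("successfully as expected"). The same mechanism is driven over RPC in the bdev_nvme_reset_stuck_adm_cmd test further down; copied from that trace for reference, the RPC form injects status sct 0 / sc 1 into one admin command of opcode 10 (which the later trace resolves to GET FEATURES) with a 15-second timeout:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_add_error_injection -n nvme0 \
        --cmd-type admin --opc 10 --timeout-in-us 15000000 \
        --err-count 1 --sct 0 --sc 1 --do_not_submit
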
00:34:19.579 ************************************ 00:34:19.579 END TEST nvme_err_injection 00:34:19.579 ************************************ 00:34:19.579 00:34:19.579 real 0m0.313s 00:34:19.579 user 0m0.130s 00:34:19.579 sys 0m0.113s 00:34:19.579 13:18:23 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:34:19.579 13:18:23 -- common/autotest_common.sh@10 -- # set +x 00:34:19.579 13:18:23 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:34:19.579 13:18:23 -- common/autotest_common.sh@1075 -- # '[' 9 -le 1 ']' 00:34:19.579 13:18:23 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:34:19.579 13:18:23 -- common/autotest_common.sh@10 -- # set +x 00:34:19.579 ************************************ 00:34:19.579 START TEST nvme_overhead 00:34:19.579 ************************************ 00:34:19.579 13:18:23 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:34:20.957 Initializing NVMe Controllers 00:34:20.957 Attached to 0000:00:10.0 00:34:20.957 Initialization complete. Launching workers. 00:34:20.957 submit (in ns) avg, min, max = 14915.2, 11968.2, 128329.1 00:34:20.957 complete (in ns) avg, min, max = 9819.8, 8306.4, 79750.0 00:34:20.957 00:34:20.957 Submit histogram 00:34:20.957 ================ 00:34:20.957 Range in us Cumulative Count 00:34:20.957 11.927 - 11.985: 0.0101% ( 1) 00:34:20.957 11.985 - 12.044: 0.0404% ( 3) 00:34:20.957 12.044 - 12.102: 0.0809% ( 4) 00:34:20.957 12.102 - 12.160: 0.0910% ( 1) 00:34:20.957 12.160 - 12.218: 0.1314% ( 4) 00:34:20.958 12.218 - 12.276: 0.1415% ( 1) 00:34:20.958 12.276 - 12.335: 0.2022% ( 6) 00:34:20.958 12.335 - 12.393: 0.3033% ( 10) 00:34:20.958 12.393 - 12.451: 0.4044% ( 10) 00:34:20.958 12.451 - 12.509: 0.4752% ( 7) 00:34:20.958 12.509 - 12.567: 0.5257% ( 5) 00:34:20.958 12.567 - 12.625: 0.5460% ( 2) 00:34:20.958 12.625 - 12.684: 0.6875% ( 14) 00:34:20.958 12.684 - 12.742: 0.8594% ( 17) 00:34:20.958 12.742 - 12.800: 1.0211% ( 16) 00:34:20.958 12.800 - 12.858: 1.2840% ( 26) 00:34:20.958 12.858 - 12.916: 1.5570% ( 27) 00:34:20.958 12.916 - 12.975: 2.2242% ( 66) 00:34:20.958 12.975 - 13.033: 4.1452% ( 190) 00:34:20.958 13.033 - 13.091: 7.3097% ( 313) 00:34:20.958 13.091 - 13.149: 11.3133% ( 396) 00:34:20.958 13.149 - 13.207: 15.0440% ( 369) 00:34:20.958 13.207 - 13.265: 17.7131% ( 264) 00:34:20.958 13.265 - 13.324: 20.1699% ( 243) 00:34:20.958 13.324 - 13.382: 22.1009% ( 191) 00:34:20.958 13.382 - 13.440: 23.2332% ( 112) 00:34:20.958 13.440 - 13.498: 23.8904% ( 65) 00:34:20.958 13.498 - 13.556: 24.2544% ( 36) 00:34:20.958 13.556 - 13.615: 24.5375% ( 28) 00:34:20.958 13.615 - 13.673: 25.0733% ( 53) 00:34:20.958 13.673 - 13.731: 26.3169% ( 123) 00:34:20.958 13.731 - 13.789: 29.9464% ( 359) 00:34:20.958 13.789 - 13.847: 37.0943% ( 707) 00:34:20.958 13.847 - 13.905: 45.8700% ( 868) 00:34:20.958 13.905 - 13.964: 53.2504% ( 730) 00:34:20.958 13.964 - 14.022: 58.8919% ( 558) 00:34:20.958 14.022 - 14.080: 63.3404% ( 440) 00:34:20.958 14.080 - 14.138: 66.6667% ( 329) 00:34:20.958 14.138 - 14.196: 68.7898% ( 210) 00:34:20.958 14.196 - 14.255: 70.3670% ( 156) 00:34:20.958 14.255 - 14.313: 71.5701% ( 119) 00:34:20.958 14.313 - 14.371: 72.5407% ( 96) 00:34:20.958 14.371 - 14.429: 73.0664% ( 52) 00:34:20.958 14.429 - 14.487: 73.6225% ( 55) 00:34:20.958 14.487 - 14.545: 74.0370% ( 41) 00:34:20.958 14.545 - 14.604: 74.3403% ( 30) 00:34:20.958 14.604 - 14.662: 74.5830% ( 24) 00:34:20.958 14.662 - 14.720: 
74.8357% ( 25) 00:34:20.958 14.720 - 14.778: 75.0278% ( 19) 00:34:20.958 14.778 - 14.836: 75.1592% ( 13) 00:34:20.958 14.836 - 14.895: 75.2907% ( 13) 00:34:20.958 14.895 - 15.011: 75.4828% ( 19) 00:34:20.958 15.011 - 15.127: 75.7153% ( 23) 00:34:20.958 15.127 - 15.244: 75.8872% ( 17) 00:34:20.958 15.244 - 15.360: 75.9782% ( 9) 00:34:20.958 15.360 - 15.476: 76.0590% ( 8) 00:34:20.958 15.476 - 15.593: 76.0995% ( 4) 00:34:20.958 15.593 - 15.709: 76.1905% ( 9) 00:34:20.958 15.709 - 15.825: 76.2309% ( 4) 00:34:20.958 15.825 - 15.942: 76.2612% ( 3) 00:34:20.958 15.942 - 16.058: 76.2916% ( 3) 00:34:20.958 16.058 - 16.175: 76.3320% ( 4) 00:34:20.958 16.175 - 16.291: 76.3522% ( 2) 00:34:20.958 16.291 - 16.407: 76.3927% ( 4) 00:34:20.958 16.407 - 16.524: 76.4028% ( 1) 00:34:20.958 16.524 - 16.640: 76.4635% ( 6) 00:34:20.958 16.640 - 16.756: 76.6555% ( 19) 00:34:20.958 16.756 - 16.873: 78.8090% ( 213) 00:34:20.958 16.873 - 16.989: 84.3595% ( 549) 00:34:20.958 16.989 - 17.105: 88.6058% ( 420) 00:34:20.958 17.105 - 17.222: 90.4357% ( 181) 00:34:20.958 17.222 - 17.338: 91.6186% ( 117) 00:34:20.958 17.338 - 17.455: 92.7510% ( 112) 00:34:20.958 17.455 - 17.571: 93.3070% ( 55) 00:34:20.958 17.571 - 17.687: 93.6609% ( 35) 00:34:20.958 17.687 - 17.804: 94.0148% ( 35) 00:34:20.958 17.804 - 17.920: 94.2271% ( 21) 00:34:20.958 17.920 - 18.036: 94.4697% ( 24) 00:34:20.958 18.036 - 18.153: 94.7225% ( 25) 00:34:20.958 18.153 - 18.269: 94.9651% ( 24) 00:34:20.958 18.269 - 18.385: 95.1471% ( 18) 00:34:20.958 18.385 - 18.502: 95.2583% ( 11) 00:34:20.958 18.502 - 18.618: 95.3796% ( 12) 00:34:20.958 18.618 - 18.735: 95.4605% ( 8) 00:34:20.958 18.735 - 18.851: 95.5111% ( 5) 00:34:20.958 18.851 - 18.967: 95.6122% ( 10) 00:34:20.958 18.967 - 19.084: 95.6931% ( 8) 00:34:20.958 19.084 - 19.200: 95.7537% ( 6) 00:34:20.958 19.200 - 19.316: 95.7840% ( 3) 00:34:20.958 19.316 - 19.433: 95.8649% ( 8) 00:34:20.958 19.433 - 19.549: 95.9863% ( 12) 00:34:20.958 19.549 - 19.665: 96.0874% ( 10) 00:34:20.958 19.665 - 19.782: 96.1379% ( 5) 00:34:20.958 19.782 - 19.898: 96.2087% ( 7) 00:34:20.958 19.898 - 20.015: 96.2693% ( 6) 00:34:20.958 20.015 - 20.131: 96.3300% ( 6) 00:34:20.958 20.131 - 20.247: 96.4210% ( 9) 00:34:20.958 20.247 - 20.364: 96.4614% ( 4) 00:34:20.958 20.364 - 20.480: 96.5423% ( 8) 00:34:20.958 20.480 - 20.596: 96.5929% ( 5) 00:34:20.958 20.596 - 20.713: 96.6636% ( 7) 00:34:20.958 20.713 - 20.829: 96.7445% ( 8) 00:34:20.958 20.829 - 20.945: 96.7850% ( 4) 00:34:20.958 20.945 - 21.062: 96.8456% ( 6) 00:34:20.958 21.062 - 21.178: 96.9265% ( 8) 00:34:20.958 21.178 - 21.295: 96.9669% ( 4) 00:34:20.958 21.295 - 21.411: 97.0175% ( 5) 00:34:20.958 21.411 - 21.527: 97.0883% ( 7) 00:34:20.958 21.527 - 21.644: 97.1489% ( 6) 00:34:20.958 21.644 - 21.760: 97.1793% ( 3) 00:34:20.958 21.760 - 21.876: 97.2702% ( 9) 00:34:20.958 21.876 - 21.993: 97.3511% ( 8) 00:34:20.958 21.993 - 22.109: 97.4017% ( 5) 00:34:20.958 22.109 - 22.225: 97.4118% ( 1) 00:34:20.958 22.225 - 22.342: 97.4522% ( 4) 00:34:20.958 22.342 - 22.458: 97.5028% ( 5) 00:34:20.958 22.458 - 22.575: 97.5533% ( 5) 00:34:20.958 22.575 - 22.691: 97.6544% ( 10) 00:34:20.958 22.691 - 22.807: 97.6747% ( 2) 00:34:20.958 22.807 - 22.924: 97.7151% ( 4) 00:34:20.958 22.924 - 23.040: 97.7758% ( 6) 00:34:20.958 23.040 - 23.156: 97.8162% ( 4) 00:34:20.958 23.156 - 23.273: 97.8870% ( 7) 00:34:20.958 23.273 - 23.389: 97.9577% ( 7) 00:34:20.958 23.389 - 23.505: 98.0285% ( 7) 00:34:20.958 23.505 - 23.622: 98.0690% ( 4) 00:34:20.958 23.622 - 23.738: 98.1094% ( 4) 00:34:20.958 23.738 - 23.855: 
98.1802% ( 7) 00:34:20.958 23.855 - 23.971: 98.2105% ( 3) 00:34:20.958 23.971 - 24.087: 98.2509% ( 4) 00:34:20.958 24.087 - 24.204: 98.2813% ( 3) 00:34:20.958 24.204 - 24.320: 98.3217% ( 4) 00:34:20.958 24.320 - 24.436: 98.3723% ( 5) 00:34:20.958 24.436 - 24.553: 98.4127% ( 4) 00:34:20.958 24.553 - 24.669: 98.4734% ( 6) 00:34:20.958 24.669 - 24.785: 98.4936% ( 2) 00:34:20.958 24.785 - 24.902: 98.5239% ( 3) 00:34:20.958 24.902 - 25.018: 98.5542% ( 3) 00:34:20.958 25.018 - 25.135: 98.6149% ( 6) 00:34:20.958 25.135 - 25.251: 98.6452% ( 3) 00:34:20.958 25.251 - 25.367: 98.7059% ( 6) 00:34:20.958 25.367 - 25.484: 98.7564% ( 5) 00:34:20.958 25.484 - 25.600: 98.7767% ( 2) 00:34:20.958 25.600 - 25.716: 98.8070% ( 3) 00:34:20.958 25.716 - 25.833: 98.8272% ( 2) 00:34:20.958 25.833 - 25.949: 98.8373% ( 1) 00:34:20.958 26.065 - 26.182: 98.8474% ( 1) 00:34:20.958 26.298 - 26.415: 98.8980% ( 5) 00:34:20.958 26.415 - 26.531: 98.9081% ( 1) 00:34:20.958 26.531 - 26.647: 98.9283% ( 2) 00:34:20.958 26.647 - 26.764: 98.9384% ( 1) 00:34:20.958 26.764 - 26.880: 98.9789% ( 4) 00:34:20.958 26.880 - 26.996: 98.9991% ( 2) 00:34:20.958 26.996 - 27.113: 99.0193% ( 2) 00:34:20.958 27.229 - 27.345: 99.0496% ( 3) 00:34:20.958 27.345 - 27.462: 99.0699% ( 2) 00:34:20.958 27.462 - 27.578: 99.0800% ( 1) 00:34:20.958 27.578 - 27.695: 99.1103% ( 3) 00:34:20.958 27.695 - 27.811: 99.1609% ( 5) 00:34:20.958 27.811 - 27.927: 99.1912% ( 3) 00:34:20.958 27.927 - 28.044: 99.2114% ( 2) 00:34:20.958 28.044 - 28.160: 99.2417% ( 3) 00:34:20.958 28.160 - 28.276: 99.2721% ( 3) 00:34:20.958 28.276 - 28.393: 99.2822% ( 1) 00:34:20.958 28.393 - 28.509: 99.3226% ( 4) 00:34:20.958 28.509 - 28.625: 99.3732% ( 5) 00:34:20.958 28.625 - 28.742: 99.3934% ( 2) 00:34:20.958 28.742 - 28.858: 99.4439% ( 5) 00:34:20.958 28.858 - 28.975: 99.4540% ( 1) 00:34:20.958 28.975 - 29.091: 99.4844% ( 3) 00:34:20.958 29.091 - 29.207: 99.5147% ( 3) 00:34:20.958 29.324 - 29.440: 99.5349% ( 2) 00:34:20.958 29.440 - 29.556: 99.5450% ( 1) 00:34:20.958 29.673 - 29.789: 99.5653% ( 2) 00:34:20.958 29.789 - 30.022: 99.6057% ( 4) 00:34:20.958 30.022 - 30.255: 99.6259% ( 2) 00:34:20.958 30.255 - 30.487: 99.6563% ( 3) 00:34:20.958 30.487 - 30.720: 99.6664% ( 1) 00:34:20.958 30.720 - 30.953: 99.6765% ( 1) 00:34:20.958 30.953 - 31.185: 99.6866% ( 1) 00:34:20.958 31.185 - 31.418: 99.7068% ( 2) 00:34:20.958 31.418 - 31.651: 99.7169% ( 1) 00:34:20.958 31.651 - 31.884: 99.7371% ( 2) 00:34:20.958 31.884 - 32.116: 99.7472% ( 1) 00:34:20.958 32.116 - 32.349: 99.7574% ( 1) 00:34:20.958 32.349 - 32.582: 99.7675% ( 1) 00:34:20.958 33.047 - 33.280: 99.7776% ( 1) 00:34:20.958 33.513 - 33.745: 99.7978% ( 2) 00:34:20.958 33.978 - 34.211: 99.8079% ( 1) 00:34:20.958 35.375 - 35.607: 99.8180% ( 1) 00:34:20.958 35.840 - 36.073: 99.8281% ( 1) 00:34:20.958 36.771 - 37.004: 99.8382% ( 1) 00:34:20.958 39.331 - 39.564: 99.8483% ( 1) 00:34:20.958 40.960 - 41.193: 99.8585% ( 1) 00:34:20.958 52.829 - 53.062: 99.8686% ( 1) 00:34:20.958 57.484 - 57.716: 99.8787% ( 1) 00:34:20.958 65.629 - 66.095: 99.8888% ( 1) 00:34:20.959 67.491 - 67.956: 99.8989% ( 1) 00:34:20.959 68.887 - 69.353: 99.9090% ( 1) 00:34:20.959 69.353 - 69.818: 99.9191% ( 1) 00:34:20.959 76.800 - 77.265: 99.9292% ( 1) 00:34:20.959 78.662 - 79.127: 99.9393% ( 1) 00:34:20.959 80.989 - 81.455: 99.9494% ( 1) 00:34:20.959 81.455 - 81.920: 99.9596% ( 1) 00:34:20.959 84.713 - 85.178: 99.9697% ( 1) 00:34:20.959 87.505 - 87.971: 99.9798% ( 1) 00:34:20.959 120.087 - 121.018: 99.9899% ( 1) 00:34:20.959 127.535 - 128.465: 100.0000% ( 1) 00:34:20.959 
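For reading the two overhead histograms: each bucket line is "low - high: cumulative% ( count )". In the submit histogram just above, for example, "13.847 - 13.905: 45.8700% ( 868 )" says that 868 submissions landed in the 13.847-13.905 us bucket and that 45.87% of all submissions took at most 13.905 us, which is why every histogram ends on a 100.0000% line. (This reading is inferred from the "Range in us Cumulative Count" header; the tool does not spell it out.)
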
00:34:20.959 Complete histogram 00:34:20.959 ================== 00:34:20.959 Range in us Cumulative Count 00:34:20.959 8.262 - 8.320: 0.0101% ( 1) 00:34:20.959 8.320 - 8.378: 0.0506% ( 4) 00:34:20.959 8.378 - 8.436: 0.0607% ( 1) 00:34:20.959 8.436 - 8.495: 0.1011% ( 4) 00:34:20.959 8.495 - 8.553: 0.1618% ( 6) 00:34:20.959 8.553 - 8.611: 0.2426% ( 8) 00:34:20.959 8.611 - 8.669: 1.1930% ( 94) 00:34:20.959 8.669 - 8.727: 11.8087% ( 1050) 00:34:20.959 8.727 - 8.785: 28.9556% ( 1696) 00:34:20.959 8.785 - 8.844: 39.0254% ( 996) 00:34:20.959 8.844 - 8.902: 43.7772% ( 470) 00:34:20.959 8.902 - 8.960: 45.9003% ( 210) 00:34:20.959 8.960 - 9.018: 47.3663% ( 145) 00:34:20.959 9.018 - 9.076: 50.0455% ( 265) 00:34:20.959 9.076 - 9.135: 54.8175% ( 472) 00:34:20.959 9.135 - 9.193: 60.4186% ( 554) 00:34:20.959 9.193 - 9.251: 64.5840% ( 412) 00:34:20.959 9.251 - 9.309: 67.4553% ( 284) 00:34:20.959 9.309 - 9.367: 69.2245% ( 175) 00:34:20.959 9.367 - 9.425: 70.5591% ( 132) 00:34:20.959 9.425 - 9.484: 71.6813% ( 111) 00:34:20.959 9.484 - 9.542: 72.4699% ( 78) 00:34:20.959 9.542 - 9.600: 73.1271% ( 65) 00:34:20.959 9.600 - 9.658: 73.6528% ( 52) 00:34:20.959 9.658 - 9.716: 73.9865% ( 33) 00:34:20.959 9.716 - 9.775: 74.2796% ( 29) 00:34:20.959 9.775 - 9.833: 74.4515% ( 17) 00:34:20.959 9.833 - 9.891: 74.5931% ( 14) 00:34:20.959 9.891 - 9.949: 74.7346% ( 14) 00:34:20.959 9.949 - 10.007: 74.9065% ( 17) 00:34:20.959 10.007 - 10.065: 75.0076% ( 10) 00:34:20.959 10.065 - 10.124: 75.0581% ( 5) 00:34:20.959 10.124 - 10.182: 75.1390% ( 8) 00:34:20.959 10.182 - 10.240: 75.1795% ( 4) 00:34:20.959 10.240 - 10.298: 75.2098% ( 3) 00:34:20.959 10.298 - 10.356: 75.2401% ( 3) 00:34:20.959 10.356 - 10.415: 75.3008% ( 6) 00:34:20.959 10.415 - 10.473: 75.3918% ( 9) 00:34:20.959 10.473 - 10.531: 75.4423% ( 5) 00:34:20.959 10.531 - 10.589: 75.5030% ( 6) 00:34:20.959 10.589 - 10.647: 75.5940% ( 9) 00:34:20.959 10.647 - 10.705: 75.6243% ( 3) 00:34:20.959 10.705 - 10.764: 75.6647% ( 4) 00:34:20.959 10.764 - 10.822: 75.7153% ( 5) 00:34:20.959 10.822 - 10.880: 75.9782% ( 26) 00:34:20.959 10.880 - 10.938: 78.3338% ( 233) 00:34:20.959 10.938 - 10.996: 83.8338% ( 544) 00:34:20.959 10.996 - 11.055: 88.7878% ( 490) 00:34:20.959 11.055 - 11.113: 90.9109% ( 210) 00:34:20.959 11.113 - 11.171: 92.1039% ( 118) 00:34:20.959 11.171 - 11.229: 92.7308% ( 62) 00:34:20.959 11.229 - 11.287: 93.2666% ( 53) 00:34:20.959 11.287 - 11.345: 93.5194% ( 25) 00:34:20.959 11.345 - 11.404: 93.7013% ( 18) 00:34:20.959 11.404 - 11.462: 93.8328% ( 13) 00:34:20.959 11.462 - 11.520: 93.8934% ( 6) 00:34:20.959 11.520 - 11.578: 93.9541% ( 6) 00:34:20.959 11.578 - 11.636: 94.0047% ( 5) 00:34:20.959 11.636 - 11.695: 94.1058% ( 10) 00:34:20.959 11.695 - 11.753: 94.2069% ( 10) 00:34:20.959 11.753 - 11.811: 94.3080% ( 10) 00:34:20.959 11.811 - 11.869: 94.3484% ( 4) 00:34:20.959 11.869 - 11.927: 94.3888% ( 4) 00:34:20.959 11.927 - 11.985: 94.4394% ( 5) 00:34:20.959 11.985 - 12.044: 94.4798% ( 4) 00:34:20.959 12.044 - 12.102: 94.5304% ( 5) 00:34:20.959 12.102 - 12.160: 94.5708% ( 4) 00:34:20.959 12.160 - 12.218: 94.6214% ( 5) 00:34:20.959 12.218 - 12.276: 94.7124% ( 9) 00:34:20.959 12.276 - 12.335: 94.7932% ( 8) 00:34:20.959 12.335 - 12.393: 94.8337% ( 4) 00:34:20.959 12.393 - 12.451: 94.8842% ( 5) 00:34:20.959 12.451 - 12.509: 94.9348% ( 5) 00:34:20.959 12.509 - 12.567: 94.9651% ( 3) 00:34:20.959 12.567 - 12.625: 95.0056% ( 4) 00:34:20.959 12.625 - 12.684: 95.0359% ( 3) 00:34:20.959 12.684 - 12.742: 95.0763% ( 4) 00:34:20.959 12.742 - 12.800: 95.0966% ( 2) 00:34:20.959 12.800 - 
12.858: 95.1067% ( 1) 00:34:20.959 12.858 - 12.916: 95.1370% ( 3) 00:34:20.959 12.916 - 12.975: 95.1774% ( 4) 00:34:20.959 12.975 - 13.033: 95.2179% ( 4) 00:34:20.959 13.033 - 13.091: 95.2583% ( 4) 00:34:20.959 13.091 - 13.149: 95.2886% ( 3) 00:34:20.959 13.149 - 13.207: 95.3190% ( 3) 00:34:20.959 13.207 - 13.265: 95.3392% ( 2) 00:34:20.959 13.265 - 13.324: 95.3999% ( 6) 00:34:20.959 13.324 - 13.382: 95.4605% ( 6) 00:34:20.959 13.382 - 13.440: 95.4909% ( 3) 00:34:20.959 13.440 - 13.498: 95.5414% ( 5) 00:34:20.959 13.498 - 13.556: 95.5818% ( 4) 00:34:20.959 13.556 - 13.615: 95.6425% ( 6) 00:34:20.959 13.615 - 13.673: 95.7032% ( 6) 00:34:20.959 13.673 - 13.731: 95.7234% ( 2) 00:34:20.959 13.731 - 13.789: 95.7739% ( 5) 00:34:20.959 13.789 - 13.847: 95.8144% ( 4) 00:34:20.959 13.847 - 13.905: 95.9256% ( 11) 00:34:20.959 13.905 - 13.964: 95.9761% ( 5) 00:34:20.959 13.964 - 14.022: 96.0267% ( 5) 00:34:20.959 14.022 - 14.080: 96.0570% ( 3) 00:34:20.959 14.080 - 14.138: 96.1177% ( 6) 00:34:20.959 14.138 - 14.196: 96.1783% ( 6) 00:34:20.959 14.196 - 14.255: 96.2188% ( 4) 00:34:20.959 14.255 - 14.313: 96.2491% ( 3) 00:34:20.959 14.313 - 14.371: 96.2896% ( 4) 00:34:20.959 14.371 - 14.429: 96.2997% ( 1) 00:34:20.959 14.487 - 14.545: 96.3300% ( 3) 00:34:20.959 14.604 - 14.662: 96.3401% ( 1) 00:34:20.959 14.662 - 14.720: 96.3704% ( 3) 00:34:20.959 14.778 - 14.836: 96.4109% ( 4) 00:34:20.959 14.836 - 14.895: 96.4210% ( 1) 00:34:20.959 14.895 - 15.011: 96.4816% ( 6) 00:34:20.959 15.011 - 15.127: 96.5828% ( 10) 00:34:20.959 15.127 - 15.244: 96.6131% ( 3) 00:34:20.959 15.244 - 15.360: 96.6737% ( 6) 00:34:20.959 15.360 - 15.476: 96.7546% ( 8) 00:34:20.959 15.476 - 15.593: 96.8658% ( 11) 00:34:20.959 15.593 - 15.709: 96.9366% ( 7) 00:34:20.959 15.709 - 15.825: 97.0478% ( 11) 00:34:20.959 15.825 - 15.942: 97.1287% ( 8) 00:34:20.959 15.942 - 16.058: 97.1894% ( 6) 00:34:20.959 16.058 - 16.175: 97.2905% ( 10) 00:34:20.959 16.175 - 16.291: 97.3612% ( 7) 00:34:20.959 16.291 - 16.407: 97.4826% ( 12) 00:34:20.959 16.407 - 16.524: 97.5634% ( 8) 00:34:20.959 16.524 - 16.640: 97.6747% ( 11) 00:34:20.959 16.640 - 16.756: 97.7656% ( 9) 00:34:20.959 16.756 - 16.873: 97.8263% ( 6) 00:34:20.959 16.873 - 16.989: 97.9072% ( 8) 00:34:20.959 16.989 - 17.105: 97.9881% ( 8) 00:34:20.959 17.105 - 17.222: 98.0690% ( 8) 00:34:20.959 17.222 - 17.338: 98.1498% ( 8) 00:34:20.959 17.338 - 17.455: 98.2206% ( 7) 00:34:20.959 17.455 - 17.571: 98.2712% ( 5) 00:34:20.959 17.571 - 17.687: 98.3520% ( 8) 00:34:20.959 17.687 - 17.804: 98.4026% ( 5) 00:34:20.959 17.804 - 17.920: 98.4329% ( 3) 00:34:20.959 17.920 - 18.036: 98.4835% ( 5) 00:34:20.959 18.036 - 18.153: 98.5441% ( 6) 00:34:20.959 18.153 - 18.269: 98.5846% ( 4) 00:34:20.959 18.269 - 18.385: 98.6149% ( 3) 00:34:20.959 18.385 - 18.502: 98.6553% ( 4) 00:34:20.959 18.502 - 18.618: 98.7362% ( 8) 00:34:20.959 18.618 - 18.735: 98.7868% ( 5) 00:34:20.959 18.735 - 18.851: 98.7969% ( 1) 00:34:20.959 18.851 - 18.967: 98.8272% ( 3) 00:34:20.959 18.967 - 19.084: 98.8677% ( 4) 00:34:20.959 19.084 - 19.200: 98.8879% ( 2) 00:34:20.959 19.200 - 19.316: 98.9283% ( 4) 00:34:20.959 19.549 - 19.665: 98.9384% ( 1) 00:34:20.959 19.665 - 19.782: 98.9586% ( 2) 00:34:20.959 19.782 - 19.898: 98.9688% ( 1) 00:34:20.959 19.898 - 20.015: 98.9890% ( 2) 00:34:20.959 20.015 - 20.131: 98.9991% ( 1) 00:34:20.959 20.247 - 20.364: 99.0294% ( 3) 00:34:20.959 20.364 - 20.480: 99.0395% ( 1) 00:34:20.959 20.480 - 20.596: 99.0496% ( 1) 00:34:20.959 20.596 - 20.713: 99.0800% ( 3) 00:34:20.959 20.713 - 20.829: 99.0901% ( 1) 
00:34:20.959 20.829 - 20.945: 99.1103% ( 2) 00:34:20.959 20.945 - 21.062: 99.1305% ( 2) 00:34:20.959 21.062 - 21.178: 99.1507% ( 2) 00:34:20.959 21.178 - 21.295: 99.1609% ( 1) 00:34:20.959 21.295 - 21.411: 99.1710% ( 1) 00:34:20.959 21.411 - 21.527: 99.1811% ( 1) 00:34:20.959 21.760 - 21.876: 99.2114% ( 3) 00:34:20.959 21.876 - 21.993: 99.2316% ( 2) 00:34:20.959 22.109 - 22.225: 99.2518% ( 2) 00:34:20.959 22.342 - 22.458: 99.2620% ( 1) 00:34:20.959 22.575 - 22.691: 99.2721% ( 1) 00:34:20.959 22.691 - 22.807: 99.2923% ( 2) 00:34:20.959 22.924 - 23.040: 99.3024% ( 1) 00:34:20.959 23.156 - 23.273: 99.3125% ( 1) 00:34:20.959 23.273 - 23.389: 99.3327% ( 2) 00:34:20.959 23.389 - 23.505: 99.3631% ( 3) 00:34:20.959 23.505 - 23.622: 99.3732% ( 1) 00:34:20.959 23.622 - 23.738: 99.3833% ( 1) 00:34:20.959 23.738 - 23.855: 99.4035% ( 2) 00:34:20.959 23.855 - 23.971: 99.4338% ( 3) 00:34:20.960 23.971 - 24.087: 99.4743% ( 4) 00:34:20.960 24.087 - 24.204: 99.5147% ( 4) 00:34:20.960 24.204 - 24.320: 99.5653% ( 5) 00:34:20.960 24.320 - 24.436: 99.6057% ( 4) 00:34:20.960 24.436 - 24.553: 99.6158% ( 1) 00:34:20.960 24.553 - 24.669: 99.6360% ( 2) 00:34:20.960 24.785 - 24.902: 99.6563% ( 2) 00:34:20.960 25.018 - 25.135: 99.6664% ( 1) 00:34:20.960 25.135 - 25.251: 99.6866% ( 2) 00:34:20.960 25.367 - 25.484: 99.7270% ( 4) 00:34:20.960 25.484 - 25.600: 99.7371% ( 1) 00:34:20.960 25.716 - 25.833: 99.7472% ( 1) 00:34:20.960 26.065 - 26.182: 99.7675% ( 2) 00:34:20.960 26.182 - 26.298: 99.7776% ( 1) 00:34:20.960 26.415 - 26.531: 99.7978% ( 2) 00:34:20.960 26.647 - 26.764: 99.8180% ( 2) 00:34:20.960 27.229 - 27.345: 99.8281% ( 1) 00:34:20.960 27.578 - 27.695: 99.8382% ( 1) 00:34:20.960 27.695 - 27.811: 99.8483% ( 1) 00:34:20.960 27.927 - 28.044: 99.8585% ( 1) 00:34:20.960 28.858 - 28.975: 99.8686% ( 1) 00:34:20.960 29.091 - 29.207: 99.8787% ( 1) 00:34:20.960 30.953 - 31.185: 99.8888% ( 1) 00:34:20.960 31.418 - 31.651: 99.8989% ( 1) 00:34:20.960 33.280 - 33.513: 99.9090% ( 1) 00:34:20.960 34.211 - 34.444: 99.9191% ( 1) 00:34:20.960 35.375 - 35.607: 99.9292% ( 1) 00:34:20.960 35.840 - 36.073: 99.9393% ( 1) 00:34:20.960 40.029 - 40.262: 99.9494% ( 1) 00:34:20.960 42.822 - 43.055: 99.9596% ( 1) 00:34:20.960 50.502 - 50.735: 99.9697% ( 1) 00:34:20.960 56.320 - 56.553: 99.9798% ( 1) 00:34:20.960 56.785 - 57.018: 99.9899% ( 1) 00:34:20.960 79.593 - 80.058: 100.0000% ( 1) 00:34:20.960 00:34:20.960 ************************************ 00:34:20.960 END TEST nvme_overhead 00:34:20.960 ************************************ 00:34:20.960 00:34:20.960 real 0m1.304s 00:34:20.960 user 0m1.097s 00:34:20.960 sys 0m0.116s 00:34:20.960 13:18:24 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:34:20.960 13:18:24 -- common/autotest_common.sh@10 -- # set +x 00:34:20.960 13:18:24 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:34:20.960 13:18:24 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:34:20.960 13:18:24 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:34:20.960 13:18:24 -- common/autotest_common.sh@10 -- # set +x 00:34:20.960 ************************************ 00:34:20.960 START TEST nvme_arbitration 00:34:20.960 ************************************ 00:34:20.960 13:18:24 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:34:24.248 Initializing NVMe Controllers 00:34:24.248 Attached to 0000:00:10.0 00:34:24.248 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:34:24.248 Associating 
QEMU NVMe Ctrl (12340 ) with lcore 1 00:34:24.248 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:34:24.248 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:34:24.248 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:34:24.248 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:34:24.248 Initialization complete. Launching workers. 00:34:24.248 Starting thread on core 1 with urgent priority queue 00:34:24.248 Starting thread on core 2 with urgent priority queue 00:34:24.248 Starting thread on core 3 with urgent priority queue 00:34:24.248 Starting thread on core 0 with urgent priority queue 00:34:24.248 QEMU NVMe Ctrl (12340 ) core 0: 1386.67 IO/s 72.12 secs/100000 ios 00:34:24.248 QEMU NVMe Ctrl (12340 ) core 1: 1237.33 IO/s 80.82 secs/100000 ios 00:34:24.248 QEMU NVMe Ctrl (12340 ) core 2: 725.33 IO/s 137.87 secs/100000 ios 00:34:24.248 QEMU NVMe Ctrl (12340 ) core 3: 661.33 IO/s 151.21 secs/100000 ios 00:34:24.248 ======================================================== 00:34:24.248 00:34:24.248 ************************************ 00:34:24.248 END TEST nvme_arbitration 00:34:24.248 ************************************ 00:34:24.248 00:34:24.248 real 0m3.425s 00:34:24.248 user 0m9.335s 00:34:24.248 sys 0m0.140s 00:34:24.248 13:18:28 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:34:24.248 13:18:28 -- common/autotest_common.sh@10 -- # set +x 00:34:24.248 13:18:28 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:34:24.248 13:18:28 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:34:24.248 13:18:28 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:34:24.248 13:18:28 -- common/autotest_common.sh@10 -- # set +x 00:34:24.248 ************************************ 00:34:24.248 START TEST nvme_single_aen 00:34:24.248 ************************************ 00:34:24.248 13:18:28 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:34:24.816 Asynchronous Event Request test 00:34:24.816 Attached to 0000:00:10.0 00:34:24.816 Reset controller to setup AER completions for this process 00:34:24.816 Registering asynchronous event callbacks... 00:34:24.816 Getting orig temperature thresholds of all controllers 00:34:24.816 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:34:24.816 Setting all controllers temperature threshold low to trigger AER 00:34:24.816 Waiting for all controllers temperature threshold to be set lower 00:34:24.816 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:34:24.816 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:34:24.816 Waiting for all controllers to trigger AER and reset threshold 00:34:24.816 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:34:24.816 Cleaning up... 
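The aer output above is the whole AER handshake in miniature: the controller's original threshold of 343 Kelvin (70 Celsius) sits safely above the live reading of 323 Kelvin (50 Celsius), so the test lowers the threshold below the current temperature, making the over-temperature condition true immediately and forcing the controller to post an asynchronous event without any real thermal change; aer_cb then sees aen_event_type 0x01 with log page 2 (SMART / health information) and restores the threshold. The Kelvin-to-Celsius pairs are the controller's own figures (343 K and 323 K are 70 C and 50 C after subtracting 273).
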
00:34:24.816 ************************************ 00:34:24.816 END TEST nvme_single_aen 00:34:24.816 ************************************ 00:34:24.816 00:34:24.816 real 0m0.307s 00:34:24.816 user 0m0.105s 00:34:24.816 sys 0m0.124s 00:34:24.816 13:18:28 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:34:24.816 13:18:28 -- common/autotest_common.sh@10 -- # set +x 00:34:24.816 13:18:28 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:34:24.816 13:18:28 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:34:24.816 13:18:28 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:34:24.816 13:18:28 -- common/autotest_common.sh@10 -- # set +x 00:34:24.816 ************************************ 00:34:24.816 START TEST nvme_doorbell_aers 00:34:24.816 ************************************ 00:34:24.816 13:18:28 -- common/autotest_common.sh@1099 -- # nvme_doorbell_aers 00:34:24.816 13:18:28 -- nvme/nvme.sh@70 -- # bdfs=() 00:34:24.816 13:18:28 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:34:24.816 13:18:28 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:34:24.816 13:18:28 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:34:24.816 13:18:28 -- common/autotest_common.sh@1487 -- # bdfs=() 00:34:24.816 13:18:28 -- common/autotest_common.sh@1487 -- # local bdfs 00:34:24.816 13:18:28 -- common/autotest_common.sh@1488 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:24.816 13:18:28 -- common/autotest_common.sh@1488 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:34:24.817 13:18:28 -- common/autotest_common.sh@1488 -- # jq -r '.config[].params.traddr' 00:34:24.817 13:18:28 -- common/autotest_common.sh@1489 -- # (( 1 == 0 )) 00:34:24.817 13:18:28 -- common/autotest_common.sh@1493 -- # printf '%s\n' 0000:00:10.0 00:34:24.817 13:18:28 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:34:24.817 13:18:28 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:34:25.075 [2024-04-17 13:18:29.144259] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 149649) is not found. Dropping the request. 00:34:35.052 Executing: test_write_invalid_db 00:34:35.052 Waiting for AER completion... 00:34:35.052 Failure: test_write_invalid_db 00:34:35.052 00:34:35.052 Executing: test_invalid_db_write_overflow_sq 00:34:35.052 Waiting for AER completion... 00:34:35.052 Failure: test_invalid_db_write_overflow_sq 00:34:35.052 00:34:35.052 Executing: test_invalid_db_write_overflow_cq 00:34:35.052 Waiting for AER completion... 
00:34:35.052 Failure: test_invalid_db_write_overflow_cq 00:34:35.052 00:34:35.052 ************************************ 00:34:35.052 END TEST nvme_doorbell_aers 00:34:35.052 ************************************ 00:34:35.052 00:34:35.052 real 0m10.122s 00:34:35.052 user 0m8.514s 00:34:35.052 sys 0m1.516s 00:34:35.052 13:18:38 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:34:35.052 13:18:38 -- common/autotest_common.sh@10 -- # set +x 00:34:35.052 13:18:38 -- nvme/nvme.sh@97 -- # uname 00:34:35.052 13:18:38 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:34:35.052 13:18:38 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:34:35.052 13:18:38 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:34:35.052 13:18:38 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:34:35.052 13:18:38 -- common/autotest_common.sh@10 -- # set +x 00:34:35.052 ************************************ 00:34:35.052 START TEST nvme_multi_aen 00:34:35.053 ************************************ 00:34:35.053 13:18:38 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:34:35.311 [2024-04-17 13:18:39.226426] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 149649) is not found. Dropping the request. 00:34:35.311 [2024-04-17 13:18:39.226774] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 149649) is not found. Dropping the request. 00:34:35.311 [2024-04-17 13:18:39.226923] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 149649) is not found. Dropping the request. 00:34:35.311 Child process pid: 149883 00:34:35.570 [Child] Asynchronous Event Request test 00:34:35.570 [Child] Attached to 0000:00:10.0 00:34:35.570 [Child] Registering asynchronous event callbacks... 00:34:35.570 [Child] Getting orig temperature thresholds of all controllers 00:34:35.570 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:34:35.570 [Child] Waiting for all controllers to trigger AER and reset threshold 00:34:35.570 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:34:35.570 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:34:35.570 [Child] Cleaning up... 00:34:35.570 Asynchronous Event Request test 00:34:35.570 Attached to 0000:00:10.0 00:34:35.570 Reset controller to setup AER completions for this process 00:34:35.570 Registering asynchronous event callbacks... 00:34:35.570 Getting orig temperature thresholds of all controllers 00:34:35.570 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:34:35.570 Setting all controllers temperature threshold low to trigger AER 00:34:35.570 Waiting for all controllers temperature threshold to be set lower 00:34:35.570 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:34:35.570 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:34:35.570 Waiting for all controllers to trigger AER and reset threshold 00:34:35.570 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:34:35.570 Cleaning up... 
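Two details of the multi-AEN run above are easy to miss: the [Child] prefix marks a forked copy of the aer tool (pid 149883, created because of -m) that runs the full AER sequence before the parent repeats it, and the three "Dropping the request" errors reference pid 149649, apparently a pending request whose owning process from an earlier test had already exited (an inference; the log itself only records the drops). Separately, the doorbell test above and nvme_fio later both enumerate controllers through the traced get_nvme_bdfs helper; a sketch reconstructing it from nothing but the xtrace lines, where everything beyond the two traced commands is an assumption:

    get_nvme_bdfs() {
        local bdfs
        # gen_nvme.sh emits a JSON bdev config; pull each controller's PCI address
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        ((${#bdfs[@]} == 0)) && return 1    # the trace guards against an empty list
        printf '%s\n' "${bdfs[@]}"          # here: the single 0000:00:10.0
    }
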
00:34:35.570 ************************************ 00:34:35.570 END TEST nvme_multi_aen 00:34:35.570 ************************************ 00:34:35.570 00:34:35.570 real 0m0.669s 00:34:35.570 user 0m0.273s 00:34:35.570 sys 0m0.240s 00:34:35.570 13:18:39 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:34:35.570 13:18:39 -- common/autotest_common.sh@10 -- # set +x 00:34:35.570 13:18:39 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:34:35.570 13:18:39 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:34:35.570 13:18:39 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:34:35.570 13:18:39 -- common/autotest_common.sh@10 -- # set +x 00:34:35.829 ************************************ 00:34:35.829 START TEST nvme_startup 00:34:35.829 ************************************ 00:34:35.829 13:18:39 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:34:36.101 Initializing NVMe Controllers 00:34:36.101 Attached to 0000:00:10.0 00:34:36.101 Initialization complete. 00:34:36.101 Time used:222076.328 (us). 00:34:36.101 ************************************ 00:34:36.101 END TEST nvme_startup 00:34:36.101 ************************************ 00:34:36.101 00:34:36.101 real 0m0.321s 00:34:36.101 user 0m0.128s 00:34:36.101 sys 0m0.111s 00:34:36.101 13:18:40 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:34:36.101 13:18:40 -- common/autotest_common.sh@10 -- # set +x 00:34:36.101 13:18:40 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:34:36.101 13:18:40 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:34:36.101 13:18:40 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:34:36.101 13:18:40 -- common/autotest_common.sh@10 -- # set +x 00:34:36.101 ************************************ 00:34:36.101 START TEST nvme_multi_secondary 00:34:36.101 ************************************ 00:34:36.101 13:18:40 -- common/autotest_common.sh@1099 -- # nvme_multi_secondary 00:34:36.101 13:18:40 -- nvme/nvme.sh@52 -- # pid0=149958 00:34:36.101 13:18:40 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:34:36.101 13:18:40 -- nvme/nvme.sh@54 -- # pid1=149959 00:34:36.102 13:18:40 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:34:36.102 13:18:40 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:34:39.415 Initializing NVMe Controllers 00:34:39.415 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:34:39.415 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:34:39.415 Initialization complete. Launching workers. 00:34:39.415 ======================================================== 00:34:39.415 Latency(us) 00:34:39.415 Device Information : IOPS MiB/s Average min max 00:34:39.415 PCIE (0000:00:10.0) NSID 1 from core 2: 14293.21 55.83 1118.81 158.97 28698.83 00:34:39.415 ======================================================== 00:34:39.415 Total : 14293.21 55.83 1118.81 158.97 28698.83 00:34:39.415 00:34:39.415 13:18:43 -- nvme/nvme.sh@56 -- # wait 149958 00:34:39.674 Initializing NVMe Controllers 00:34:39.674 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:34:39.674 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:34:39.674 Initialization complete. Launching workers. 
00:34:39.674 ======================================================== 00:34:39.674 Latency(us) 00:34:39.674 Device Information : IOPS MiB/s Average min max 00:34:39.674 PCIE (0000:00:10.0) NSID 1 from core 1: 33301.16 130.08 480.08 155.36 5276.66 00:34:39.674 ======================================================== 00:34:39.674 Total : 33301.16 130.08 480.08 155.36 5276.66 00:34:39.674 00:34:41.601 Initializing NVMe Controllers 00:34:41.601 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:34:41.601 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:34:41.601 Initialization complete. Launching workers. 00:34:41.601 ======================================================== 00:34:41.601 Latency(us) 00:34:41.601 Device Information : IOPS MiB/s Average min max 00:34:41.601 PCIE (0000:00:10.0) NSID 1 from core 0: 42085.20 164.40 379.82 137.25 4877.07 00:34:41.601 ======================================================== 00:34:41.601 Total : 42085.20 164.40 379.82 137.25 4877.07 00:34:41.601 00:34:41.601 13:18:45 -- nvme/nvme.sh@57 -- # wait 149959 00:34:41.601 13:18:45 -- nvme/nvme.sh@61 -- # pid0=150031 00:34:41.601 13:18:45 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:34:41.601 13:18:45 -- nvme/nvme.sh@63 -- # pid1=150032 00:34:41.601 13:18:45 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:34:41.601 13:18:45 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:34:45.004 Initializing NVMe Controllers 00:34:45.004 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:34:45.004 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:34:45.004 Initialization complete. Launching workers. 00:34:45.004 ======================================================== 00:34:45.004 Latency(us) 00:34:45.004 Device Information : IOPS MiB/s Average min max 00:34:45.004 PCIE (0000:00:10.0) NSID 1 from core 0: 33124.32 129.39 482.64 143.38 1770.14 00:34:45.004 ======================================================== 00:34:45.004 Total : 33124.32 129.39 482.64 143.38 1770.14 00:34:45.004 00:34:45.263 Initializing NVMe Controllers 00:34:45.263 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:34:45.264 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:34:45.264 Initialization complete. Launching workers. 00:34:45.264 ======================================================== 00:34:45.264 Latency(us) 00:34:45.264 Device Information : IOPS MiB/s Average min max 00:34:45.264 PCIE (0000:00:10.0) NSID 1 from core 1: 34454.66 134.59 463.99 137.27 1600.90 00:34:45.264 ======================================================== 00:34:45.264 Total : 34454.66 134.59 463.99 137.27 1600.90 00:34:45.264 00:34:47.173 Initializing NVMe Controllers 00:34:47.173 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:34:47.174 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:34:47.174 Initialization complete. Launching workers. 
00:34:47.174 ======================================================== 00:34:47.174 Latency(us) 00:34:47.174 Device Information : IOPS MiB/s Average min max 00:34:47.174 PCIE (0000:00:10.0) NSID 1 from core 2: 17616.62 68.81 907.56 145.78 21102.75 00:34:47.174 ======================================================== 00:34:47.174 Total : 17616.62 68.81 907.56 145.78 21102.75 00:34:47.174 00:34:47.174 ************************************ 00:34:47.174 END TEST nvme_multi_secondary 00:34:47.174 ************************************ 00:34:47.174 13:18:50 -- nvme/nvme.sh@65 -- # wait 150031 00:34:47.174 13:18:50 -- nvme/nvme.sh@66 -- # wait 150032 00:34:47.174 00:34:47.174 real 0m10.847s 00:34:47.174 user 0m18.681s 00:34:47.174 sys 0m0.770s 00:34:47.174 13:18:50 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:34:47.174 13:18:50 -- common/autotest_common.sh@10 -- # set +x 00:34:47.174 13:18:50 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:34:47.174 13:18:50 -- nvme/nvme.sh@102 -- # kill_stub 00:34:47.174 13:18:50 -- common/autotest_common.sh@1063 -- # [[ -e /proc/149144 ]] 00:34:47.174 13:18:50 -- common/autotest_common.sh@1064 -- # kill 149144 00:34:47.174 13:18:50 -- common/autotest_common.sh@1065 -- # wait 149144 00:34:47.174 [2024-04-17 13:18:51.006184] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 149882) is not found. Dropping the request. 00:34:47.174 [2024-04-17 13:18:51.007317] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 149882) is not found. Dropping the request. 00:34:47.174 [2024-04-17 13:18:51.007772] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 149882) is not found. Dropping the request. 00:34:47.174 [2024-04-17 13:18:51.008304] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 149882) is not found. Dropping the request. 00:34:47.174 [2024-04-17 13:18:51.267882] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:34:47.174 13:18:51 -- common/autotest_common.sh@1067 -- # rm -f /var/run/spdk_stub0 00:34:47.174 13:18:51 -- common/autotest_common.sh@1071 -- # echo 2 00:34:47.174 13:18:51 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:34:47.174 13:18:51 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:34:47.174 13:18:51 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:34:47.174 13:18:51 -- common/autotest_common.sh@10 -- # set +x 00:34:47.174 ************************************ 00:34:47.174 START TEST bdev_nvme_reset_stuck_adm_cmd 00:34:47.174 ************************************ 00:34:47.174 13:18:51 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:34:47.432 * Looking for test storage... 
00:34:47.432 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:34:47.432 13:18:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:34:47.432 13:18:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:34:47.432 13:18:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:34:47.432 13:18:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:34:47.432 13:18:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:34:47.432 13:18:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:34:47.432 13:18:51 -- common/autotest_common.sh@1498 -- # bdfs=() 00:34:47.432 13:18:51 -- common/autotest_common.sh@1498 -- # local bdfs 00:34:47.432 13:18:51 -- common/autotest_common.sh@1499 -- # bdfs=($(get_nvme_bdfs)) 00:34:47.433 13:18:51 -- common/autotest_common.sh@1499 -- # get_nvme_bdfs 00:34:47.433 13:18:51 -- common/autotest_common.sh@1487 -- # bdfs=() 00:34:47.433 13:18:51 -- common/autotest_common.sh@1487 -- # local bdfs 00:34:47.433 13:18:51 -- common/autotest_common.sh@1488 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:47.433 13:18:51 -- common/autotest_common.sh@1488 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:34:47.433 13:18:51 -- common/autotest_common.sh@1488 -- # jq -r '.config[].params.traddr' 00:34:47.433 13:18:51 -- common/autotest_common.sh@1489 -- # (( 1 == 0 )) 00:34:47.433 13:18:51 -- common/autotest_common.sh@1493 -- # printf '%s\n' 0000:00:10.0 00:34:47.433 13:18:51 -- common/autotest_common.sh@1501 -- # echo 0000:00:10.0 00:34:47.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:47.433 13:18:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:34:47.433 13:18:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:34:47.433 13:18:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=150205 00:34:47.433 13:18:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:34:47.433 13:18:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:34:47.433 13:18:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 150205 00:34:47.433 13:18:51 -- common/autotest_common.sh@817 -- # '[' -z 150205 ']' 00:34:47.433 13:18:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:47.433 13:18:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:34:47.433 13:18:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:47.433 13:18:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:34:47.433 13:18:51 -- common/autotest_common.sh@10 -- # set +x 00:34:47.433 [2024-04-17 13:18:51.541083] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:34:47.433 [2024-04-17 13:18:51.541440] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150205 ] 00:34:47.693 [2024-04-17 13:18:51.746999] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:47.952 [2024-04-17 13:18:51.985119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:47.952 [2024-04-17 13:18:51.985269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:47.952 [2024-04-17 13:18:51.985403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:47.952 [2024-04-17 13:18:51.985405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:48.888 13:18:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:34:48.888 13:18:52 -- common/autotest_common.sh@850 -- # return 0 00:34:48.888 13:18:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:34:48.888 13:18:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:48.888 13:18:52 -- common/autotest_common.sh@10 -- # set +x 00:34:48.888 nvme0n1 00:34:48.888 13:18:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:48.888 13:18:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:34:48.888 13:18:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_XGEMN.txt 00:34:48.889 13:18:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:34:48.889 13:18:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:48.889 13:18:52 -- common/autotest_common.sh@10 -- # set +x 00:34:48.889 true 00:34:48.889 13:18:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:48.889 13:18:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:34:48.889 13:18:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1713359932 00:34:48.889 13:18:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=150231 00:34:48.889 13:18:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:34:48.889 13:18:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:34:48.889 13:18:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:34:50.790 13:18:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:34:50.790 13:18:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:50.790 13:18:54 -- common/autotest_common.sh@10 -- # set +x 00:34:50.790 [2024-04-17 13:18:54.846212] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:34:50.790 [2024-04-17 13:18:54.846880] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:50.790 [2024-04-17 13:18:54.847100] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:34:50.790 [2024-04-17 13:18:54.847320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:50.790 [2024-04-17 13:18:54.849326] 
bdev_nvme.c:2050:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:34:50.790 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 150231 00:34:50.790 13:18:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:50.790 13:18:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 150231 00:34:50.790 13:18:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 150231 00:34:50.790 13:18:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:34:50.790 13:18:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:34:50.790 13:18:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:50.790 13:18:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:50.790 13:18:54 -- common/autotest_common.sh@10 -- # set +x 00:34:50.790 13:18:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:50.790 13:18:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:34:50.790 13:18:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_XGEMN.txt 00:34:51.049 13:18:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:34:51.049 13:18:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:34:51.049 13:18:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:34:51.049 13:18:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:34:51.049 13:18:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:34:51.049 13:18:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:34:51.049 13:18:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:34:51.049 13:18:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:34:51.049 13:18:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:34:51.049 13:18:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:34:51.050 13:18:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:34:51.050 13:18:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:34:51.050 13:18:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:34:51.050 13:18:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:34:51.050 13:18:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:34:51.050 13:18:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:34:51.050 13:18:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:34:51.050 13:18:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:34:51.050 13:18:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:34:51.050 13:18:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_XGEMN.txt 00:34:51.050 13:18:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 150205 00:34:51.050 13:18:54 -- common/autotest_common.sh@924 -- # '[' -z 150205 ']' 00:34:51.050 13:18:54 -- common/autotest_common.sh@928 -- # kill -0 150205 00:34:51.050 13:18:54 -- common/autotest_common.sh@929 -- # uname 00:34:51.050 13:18:54 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:34:51.050 13:18:54 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 
150205 00:34:51.050 killing process with pid 150205 00:34:51.050 13:18:54 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:34:51.050 13:18:54 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:34:51.050 13:18:54 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 150205' 00:34:51.050 13:18:54 -- common/autotest_common.sh@943 -- # kill 150205 00:34:51.050 13:18:54 -- common/autotest_common.sh@948 -- # wait 150205 00:34:53.655 13:18:57 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:34:53.655 13:18:57 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:34:53.655 00:34:53.655 real 0m5.860s 00:34:53.655 user 0m20.402s 00:34:53.655 sys 0m0.565s 00:34:53.655 13:18:57 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:34:53.655 ************************************ 00:34:53.655 END TEST bdev_nvme_reset_stuck_adm_cmd 00:34:53.655 ************************************ 00:34:53.655 13:18:57 -- common/autotest_common.sh@10 -- # set +x 00:34:53.655 13:18:57 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:34:53.655 13:18:57 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:34:53.655 13:18:57 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:34:53.655 13:18:57 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:34:53.655 13:18:57 -- common/autotest_common.sh@10 -- # set +x 00:34:53.655 ************************************ 00:34:53.655 START TEST nvme_fio 00:34:53.655 ************************************ 00:34:53.655 13:18:57 -- common/autotest_common.sh@1099 -- # nvme_fio_test 00:34:53.655 13:18:57 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:34:53.655 13:18:57 -- nvme/nvme.sh@32 -- # ran_fio=false 00:34:53.655 13:18:57 -- nvme/nvme.sh@33 -- # bdfs=($(get_nvme_bdfs)) 00:34:53.655 13:18:57 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:34:53.655 13:18:57 -- common/autotest_common.sh@1487 -- # bdfs=() 00:34:53.655 13:18:57 -- common/autotest_common.sh@1487 -- # local bdfs 00:34:53.655 13:18:57 -- common/autotest_common.sh@1488 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:53.655 13:18:57 -- common/autotest_common.sh@1488 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:34:53.655 13:18:57 -- common/autotest_common.sh@1488 -- # jq -r '.config[].params.traddr' 00:34:53.655 13:18:57 -- common/autotest_common.sh@1489 -- # (( 1 == 0 )) 00:34:53.655 13:18:57 -- common/autotest_common.sh@1493 -- # printf '%s\n' 0000:00:10.0 00:34:53.655 13:18:57 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:34:53.655 13:18:57 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:34:53.655 13:18:57 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:34:53.655 13:18:57 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:34:53.655 13:18:57 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:34:53.655 13:18:57 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:34:53.917 13:18:57 -- nvme/nvme.sh@41 -- # bs=4096 00:34:53.917 13:18:57 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:34:53.917 13:18:57 -- common/autotest_common.sh@1334 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:34:53.917 13:18:57 -- common/autotest_common.sh@1311 -- # local fio_dir=/usr/src/fio 00:34:53.917 13:18:57 -- common/autotest_common.sh@1313 -- # sanitizers=(libasan libclang_rt.asan) 00:34:53.917 13:18:57 -- common/autotest_common.sh@1313 -- # local sanitizers 00:34:53.917 13:18:57 -- common/autotest_common.sh@1314 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:34:53.917 13:18:57 -- common/autotest_common.sh@1315 -- # shift 00:34:53.917 13:18:57 -- common/autotest_common.sh@1317 -- # local asan_lib= 00:34:53.917 13:18:57 -- common/autotest_common.sh@1318 -- # for sanitizer in "${sanitizers[@]}" 00:34:53.917 13:18:57 -- common/autotest_common.sh@1319 -- # grep libasan 00:34:53.917 13:18:57 -- common/autotest_common.sh@1319 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:34:53.917 13:18:57 -- common/autotest_common.sh@1319 -- # awk '{print $3}' 00:34:53.917 13:18:57 -- common/autotest_common.sh@1319 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:34:53.917 13:18:57 -- common/autotest_common.sh@1320 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:34:53.917 13:18:57 -- common/autotest_common.sh@1321 -- # break 00:34:53.917 13:18:57 -- common/autotest_common.sh@1326 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:34:53.917 13:18:57 -- common/autotest_common.sh@1326 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:34:53.917 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:34:53.917 fio-3.35 00:34:53.917 Starting 1 thread 00:34:57.255 00:34:57.255 test: (groupid=0, jobs=1): err= 0: pid=150376: Wed Apr 17 13:19:00 2024 00:34:57.255 read: IOPS=16.3k, BW=63.7MiB/s (66.7MB/s)(127MiB/2001msec) 00:34:57.255 slat (usec): min=4, max=633, avg= 6.40, stdev= 3.97 00:34:57.255 clat (usec): min=280, max=9711, avg=3899.17, stdev=767.19 00:34:57.255 lat (usec): min=285, max=9716, avg=3905.57, stdev=768.23 00:34:57.255 clat percentiles (usec): 00:34:57.255 | 1.00th=[ 2999], 5.00th=[ 3261], 10.00th=[ 3359], 20.00th=[ 3490], 00:34:57.255 | 30.00th=[ 3589], 40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3818], 00:34:57.255 | 70.00th=[ 3916], 80.00th=[ 4080], 90.00th=[ 4359], 95.00th=[ 5080], 00:34:57.255 | 99.00th=[ 7570], 99.50th=[ 7832], 99.90th=[ 8717], 99.95th=[ 9110], 00:34:57.255 | 99.99th=[ 9503] 00:34:57.255 bw ( KiB/s): min=58736, max=68344, per=96.75%, avg=63066.67, stdev=4873.45, samples=3 00:34:57.255 iops : min=14684, max=17086, avg=15766.67, stdev=1218.36, samples=3 00:34:57.255 write: IOPS=16.3k, BW=63.8MiB/s (66.9MB/s)(128MiB/2001msec); 0 zone resets 00:34:57.255 slat (nsec): min=4432, max=48287, avg=6632.57, stdev=1900.93 00:34:57.255 clat (usec): min=236, max=9766, avg=3913.75, stdev=775.05 00:34:57.255 lat (usec): min=242, max=9771, avg=3920.38, stdev=776.11 00:34:57.255 clat percentiles (usec): 00:34:57.256 | 1.00th=[ 3032], 5.00th=[ 3294], 10.00th=[ 3359], 20.00th=[ 3490], 00:34:57.256 | 30.00th=[ 3589], 40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3818], 00:34:57.256 | 70.00th=[ 3916], 80.00th=[ 4113], 90.00th=[ 4359], 95.00th=[ 5145], 00:34:57.256 | 99.00th=[ 7570], 99.50th=[ 7898], 99.90th=[ 8979], 99.95th=[ 9372], 00:34:57.256 | 99.99th=[ 9503] 00:34:57.256 bw ( KiB/s): min=59032, max=67616, per=96.00%, avg=62722.67, 
stdev=4416.57, samples=3 00:34:57.256 iops : min=14758, max=16904, avg=15680.67, stdev=1104.14, samples=3 00:34:57.256 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01% 00:34:57.256 lat (msec) : 2=0.13%, 4=74.73%, 10=25.10% 00:34:57.256 cpu : usr=99.90%, sys=0.00%, ctx=4, majf=0, minf=37 00:34:57.256 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:34:57.256 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:57.256 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:57.256 issued rwts: total=32607,32685,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:57.256 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:57.256 00:34:57.256 Run status group 0 (all jobs): 00:34:57.256 READ: bw=63.7MiB/s (66.7MB/s), 63.7MiB/s-63.7MiB/s (66.7MB/s-66.7MB/s), io=127MiB (134MB), run=2001-2001msec 00:34:57.256 WRITE: bw=63.8MiB/s (66.9MB/s), 63.8MiB/s-63.8MiB/s (66.9MB/s-66.9MB/s), io=128MiB (134MB), run=2001-2001msec 00:34:57.256 ----------------------------------------------------- 00:34:57.256 Suppressions used: 00:34:57.256 count bytes template 00:34:57.256 1 32 /usr/src/fio/parse.c 00:34:57.256 ----------------------------------------------------- 00:34:57.256 00:34:57.256 ************************************ 00:34:57.256 END TEST nvme_fio 00:34:57.256 ************************************ 00:34:57.256 13:19:01 -- nvme/nvme.sh@44 -- # ran_fio=true 00:34:57.256 13:19:01 -- nvme/nvme.sh@46 -- # true 00:34:57.256 00:34:57.256 real 0m4.117s 00:34:57.256 user 0m3.369s 00:34:57.256 sys 0m0.404s 00:34:57.256 13:19:01 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:34:57.256 13:19:01 -- common/autotest_common.sh@10 -- # set +x 00:34:57.515 ************************************ 00:34:57.515 END TEST nvme 00:34:57.515 ************************************ 00:34:57.515 00:34:57.515 real 0m47.570s 00:34:57.515 user 2m7.295s 00:34:57.515 sys 0m8.298s 00:34:57.515 13:19:01 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:34:57.515 13:19:01 -- common/autotest_common.sh@10 -- # set +x 00:34:57.515 13:19:01 -- spdk/autotest.sh@212 -- # [[ 0 -eq 1 ]] 00:34:57.515 13:19:01 -- spdk/autotest.sh@216 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:34:57.515 13:19:01 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:34:57.515 13:19:01 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:34:57.515 13:19:01 -- common/autotest_common.sh@10 -- # set +x 00:34:57.515 ************************************ 00:34:57.515 START TEST nvme_scc 00:34:57.515 ************************************ 00:34:57.515 13:19:01 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:34:57.515 * Looking for test storage... 
00:34:57.515 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:34:57.515 13:19:01 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:34:57.515 13:19:01 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:34:57.515 13:19:01 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:34:57.515 13:19:01 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:34:57.515 13:19:01 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:57.515 13:19:01 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:57.515 13:19:01 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:57.515 13:19:01 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:57.515 13:19:01 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:34:57.515 13:19:01 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:34:57.515 13:19:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:34:57.515 13:19:01 -- paths/export.sh@5 -- # export PATH 00:34:57.515 13:19:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:34:57.515 13:19:01 -- nvme/functions.sh@10 -- # ctrls=() 00:34:57.515 13:19:01 -- nvme/functions.sh@10 -- # declare -A ctrls 00:34:57.515 13:19:01 -- nvme/functions.sh@11 -- # nvmes=() 00:34:57.515 13:19:01 -- nvme/functions.sh@11 -- # declare -A nvmes 00:34:57.515 13:19:01 -- nvme/functions.sh@12 -- # bdfs=() 00:34:57.515 13:19:01 -- nvme/functions.sh@12 -- # declare -A bdfs 00:34:57.515 13:19:01 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:34:57.515 13:19:01 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:34:57.515 13:19:01 -- nvme/functions.sh@14 -- # nvme_name= 00:34:57.515 13:19:01 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:57.515 13:19:01 -- nvme/nvme_scc.sh@12 -- # uname 00:34:57.515 13:19:01 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:34:57.515 13:19:01 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 
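The scan_nvme_ctrls trace that follows is functions.sh folding every "reg : value" pair printed by nvme id-ctrl into a bash associative array, one eval per register. A minimal sketch of that pattern, with simplified hypothetical names (the real nvme_get() in test/common/nvme/functions.sh also handles namespaces and ordering; assumes nvme-cli and a /dev/nvme0):

    #!/usr/bin/env bash
    # Fold "reg : value" lines from nvme id-ctrl into an associative array.
    declare -A ctrl=()

    scan_ctrl() {
        local dev=$1 reg val
        # nvme id-ctrl prints one "reg : value" pair per line.
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}     # key, e.g. vid, sn, mdts, oncs
            val=${val# }                 # drop the space after the colon
            [[ -n $reg && -n $val ]] || continue
            ctrl[$reg]=$val              # e.g. ctrl[vid]=0x1b36
        done < <(nvme id-ctrl "$dev")
    }

    scan_ctrl /dev/nvme0
    echo "vid=${ctrl[vid]} mdts=${ctrl[mdts]} oncs=${ctrl[oncs]}"

This is why the trace below repeats the IFS=: / read -r reg val / eval triplet for every identify field: each iteration of that loop is logged under xtrace.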
00:34:57.515 13:19:01 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:34:57.774 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:34:57.774 Waiting for block devices as requested 00:34:58.036 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:34:58.036 13:19:02 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:34:58.036 13:19:02 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:34:58.036 13:19:02 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:34:58.036 13:19:02 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:34:58.036 13:19:02 -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:34:58.036 13:19:02 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:34:58.036 13:19:02 -- scripts/common.sh@15 -- # local i 00:34:58.036 13:19:02 -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:34:58.036 13:19:02 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:34:58.036 13:19:02 -- scripts/common.sh@24 -- # return 0 00:34:58.036 13:19:02 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:34:58.036 13:19:02 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:34:58.036 13:19:02 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:34:58.036 13:19:02 -- nvme/functions.sh@18 -- # shift 00:34:58.036 13:19:02 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:34:58.036 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.036 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.036 13:19:02 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:34:58.036 13:19:02 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:34:58.036 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.036 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.036 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:34:58.036 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:34:58.036 13:19:02 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:34:58.036 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.036 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.036 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:34:58.036 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:34:58.036 13:19:02 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:34:58.036 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.036 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.036 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:34:58.036 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12340 "' 00:34:58.036 13:19:02 -- nvme/functions.sh@23 -- # nvme0[sn]='12340 ' 00:34:58.036 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.036 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.036 13:19:02 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:34:58.036 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:34:58.036 13:19:02 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:34:58.036 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.036 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.036 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:34:58.036 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:34:58.036 13:19:02 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:34:58.036 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.036 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.036 13:19:02 -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:34:58.036 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:34:58.036 13:19:02 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:34:58.036 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.036 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.036 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:34:58.036 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:34:58.036 13:19:02 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:34:58.036 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.036 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.036 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.036 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:34:58.036 13:19:02 -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:34:58.036 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.036 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.036 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:34:58.036 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:34:58.036 13:19:02 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:34:58.036 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.036 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.036 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.036 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:34:58.036 13:19:02 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:34:58.036 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.036 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.036 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:34:58.036 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:34:58.036 13:19:02 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:34:58.036 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.036 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.037 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.037 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.037 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.037 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.037 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 
00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.037 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.037 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.037 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.037 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.037 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.037 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.037 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.037 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.037 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.037 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.037 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:34:58.037 13:19:02 -- 
nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.037 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.037 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.037 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.037 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.037 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.037 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.037 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.037 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.037 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.037 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.037 13:19:02 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.037 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.037 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.037 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.037 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.037 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.037 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.037 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:34:58.037 13:19:02 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:34:58.037 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.038 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.038 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.038 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- 
# read -r reg val 00:34:58.038 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.038 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.038 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.038 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.038 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.038 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.038 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.038 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.038 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.038 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.038 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:34:58.038 
13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.038 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.038 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.038 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.038 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.038 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.038 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.038 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.038 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.038 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.038 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:34:58.038 13:19:02 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.038 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.039 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:34:58.039 
13:19:02 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.039 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.039 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.039 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.039 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.039 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.039 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.039 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.039 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.039 13:19:02 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12340"' 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12340 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.039 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.039 
13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.039 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.039 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.039 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.039 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.039 13:19:02 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.039 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.039 13:19:02 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.039 13:19:02 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:34:58.039 13:19:02 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:34:58.039 13:19:02 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:34:58.039 13:19:02 -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:34:58.039 13:19:02 -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:34:58.039 13:19:02 -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:34:58.039 13:19:02 -- nvme/functions.sh@18 -- # shift 00:34:58.039 13:19:02 -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 
00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.039 13:19:02 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:34:58.039 13:19:02 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.039 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.039 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.039 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.039 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.039 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.039 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.039 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.039 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.039 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.039 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 
00:34:58.039 13:19:02 -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:34:58.039 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.040 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.040 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.040 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.040 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.040 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.040 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.040 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.040 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.040 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.040 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.040 13:19:02 -- nvme/functions.sh@22 -- # 
[[ -n 0 ]] 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.040 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.040 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.040 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.040 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.040 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.040 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.040 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.040 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.040 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.040 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.040 
13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.040 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.040 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.040 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.040 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.040 13:19:02 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.040 13:19:02 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.040 13:19:02 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.040 13:19:02 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.040 13:19:02 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.040 13:19:02 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # eval 
'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.040 13:19:02 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:34:58.040 13:19:02 -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:34:58.040 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.041 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.041 13:19:02 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:34:58.041 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:34:58.041 13:19:02 -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:34:58.041 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.041 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.041 13:19:02 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:34:58.041 13:19:02 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:34:58.041 13:19:02 -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:34:58.041 13:19:02 -- nvme/functions.sh@21 -- # IFS=: 00:34:58.041 13:19:02 -- nvme/functions.sh@21 -- # read -r reg val 00:34:58.041 13:19:02 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:34:58.041 13:19:02 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:34:58.041 13:19:02 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:34:58.041 13:19:02 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:34:58.041 13:19:02 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:34:58.041 13:19:02 -- nvme/functions.sh@65 -- # (( 1 > 0 )) 00:34:58.041 13:19:02 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:34:58.041 13:19:02 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:34:58.041 13:19:02 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:34:58.041 13:19:02 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:34:58.041 13:19:02 -- nvme/functions.sh@190 -- # (( 1 == 0 )) 00:34:58.041 13:19:02 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:34:58.041 13:19:02 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:34:58.041 13:19:02 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:34:58.041 13:19:02 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:34:58.041 13:19:02 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:34:58.041 13:19:02 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:34:58.041 13:19:02 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:34:58.041 13:19:02 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:34:58.041 13:19:02 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:34:58.041 13:19:02 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:34:58.041 13:19:02 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:34:58.041 13:19:02 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:34:58.041 13:19:02 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:34:58.041 13:19:02 -- nvme/functions.sh@76 -- # echo 0x15d 00:34:58.041 13:19:02 -- nvme/functions.sh@184 -- # oncs=0x15d 00:34:58.041 13:19:02 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:34:58.041 13:19:02 -- nvme/functions.sh@197 -- # echo nvme0 00:34:58.041 13:19:02 -- nvme/functions.sh@205 
-- # (( 1 > 0 )) 00:34:58.041 13:19:02 -- nvme/functions.sh@206 -- # echo nvme0 00:34:58.041 13:19:02 -- nvme/functions.sh@207 -- # return 0 00:34:58.041 13:19:02 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme0 00:34:58.041 13:19:02 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:34:58.041 13:19:02 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:34:58.610 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:34:58.610 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:34:59.546 13:19:03 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:34:59.546 13:19:03 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:34:59.546 13:19:03 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:34:59.546 13:19:03 -- common/autotest_common.sh@10 -- # set +x 00:34:59.805 ************************************ 00:34:59.805 START TEST nvme_simple_copy 00:34:59.805 ************************************ 00:34:59.805 13:19:03 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:35:00.064 Initializing NVMe Controllers 00:35:00.064 Attaching to 0000:00:10.0 00:35:00.064 Controller supports SCC. Attached to 0000:00:10.0 00:35:00.064 Namespace ID: 1 size: 5GB 00:35:00.064 Initialization complete. 00:35:00.064 00:35:00.064 Controller QEMU NVMe Ctrl (12340 ) 00:35:00.064 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:35:00.064 Namespace Block Size:4096 00:35:00.064 Writing LBAs 0 to 63 with Random Data 00:35:00.064 Copied LBAs from 0 - 63 to the Destination LBA 256 00:35:00.064 LBAs matching Written Data: 64 00:35:00.064 00:35:00.064 real 0m0.290s 00:35:00.064 user 0m0.118s 00:35:00.064 sys 0m0.072s 00:35:00.064 13:19:04 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:35:00.064 13:19:04 -- common/autotest_common.sh@10 -- # set +x 00:35:00.064 ************************************ 00:35:00.064 END TEST nvme_simple_copy 00:35:00.064 ************************************ 00:35:00.064 00:35:00.064 real 0m2.556s 00:35:00.064 user 0m0.774s 00:35:00.064 sys 0m1.597s 00:35:00.064 13:19:04 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:35:00.064 13:19:04 -- common/autotest_common.sh@10 -- # set +x 00:35:00.064 ************************************ 00:35:00.064 END TEST nvme_scc 00:35:00.064 ************************************ 00:35:00.064 13:19:04 -- spdk/autotest.sh@218 -- # [[ 0 -eq 1 ]] 00:35:00.065 13:19:04 -- spdk/autotest.sh@221 -- # [[ 0 -eq 1 ]] 00:35:00.065 13:19:04 -- spdk/autotest.sh@224 -- # [[ '' -eq 1 ]] 00:35:00.065 13:19:04 -- spdk/autotest.sh@227 -- # [[ 0 -eq 1 ]] 00:35:00.065 13:19:04 -- spdk/autotest.sh@231 -- # [[ '' -eq 1 ]] 00:35:00.065 13:19:04 -- spdk/autotest.sh@235 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:35:00.065 13:19:04 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:35:00.065 13:19:04 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:35:00.065 13:19:04 -- common/autotest_common.sh@10 -- # set +x 00:35:00.065 ************************************ 00:35:00.065 START TEST nvme_rpc 00:35:00.065 ************************************ 00:35:00.065 13:19:04 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:35:00.065 * Looking for test storage... 
00:35:00.065 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:35:00.065 13:19:04 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:00.065 13:19:04 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:35:00.065 13:19:04 -- common/autotest_common.sh@1498 -- # bdfs=() 00:35:00.065 13:19:04 -- common/autotest_common.sh@1498 -- # local bdfs 00:35:00.065 13:19:04 -- common/autotest_common.sh@1499 -- # bdfs=($(get_nvme_bdfs)) 00:35:00.065 13:19:04 -- common/autotest_common.sh@1499 -- # get_nvme_bdfs 00:35:00.065 13:19:04 -- common/autotest_common.sh@1487 -- # bdfs=() 00:35:00.065 13:19:04 -- common/autotest_common.sh@1487 -- # local bdfs 00:35:00.065 13:19:04 -- common/autotest_common.sh@1488 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:00.065 13:19:04 -- common/autotest_common.sh@1488 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:35:00.065 13:19:04 -- common/autotest_common.sh@1488 -- # jq -r '.config[].params.traddr' 00:35:00.340 13:19:04 -- common/autotest_common.sh@1489 -- # (( 1 == 0 )) 00:35:00.340 13:19:04 -- common/autotest_common.sh@1493 -- # printf '%s\n' 0000:00:10.0 00:35:00.340 13:19:04 -- common/autotest_common.sh@1501 -- # echo 0000:00:10.0 00:35:00.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:00.340 13:19:04 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:35:00.340 13:19:04 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=150882 00:35:00.340 13:19:04 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:35:00.340 13:19:04 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:35:00.340 13:19:04 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 150882 00:35:00.340 13:19:04 -- common/autotest_common.sh@817 -- # '[' -z 150882 ']' 00:35:00.340 13:19:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:00.340 13:19:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:35:00.340 13:19:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:00.340 13:19:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:35:00.340 13:19:04 -- common/autotest_common.sh@10 -- # set +x 00:35:00.340 [2024-04-17 13:19:04.344605] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
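get_first_nvme_bdf, traced above, derives the target controller for nvme_rpc.sh by generating an SPDK NVMe config with gen_nvme.sh and extracting the transport address with jq. A condensed sketch of that step, assuming a single local controller (the real helper keeps the whole bdfs array and takes element 0 rather than piping through head):

    # Enumerate local NVMe controllers as JSON config and pull out the
    # PCI address (traddr) of the first one; here it is 0000:00:10.0.
    bdf=$(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh \
            | jq -r '.config[].params.traddr' | head -n1)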
00:35:00.340 [2024-04-17 13:19:04.344812] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150882 ] 00:35:00.599 [2024-04-17 13:19:04.522309] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:00.599 [2024-04-17 13:19:04.745520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:00.599 [2024-04-17 13:19:04.745537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:01.532 13:19:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:35:01.532 13:19:05 -- common/autotest_common.sh@850 -- # return 0 00:35:01.532 13:19:05 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:35:01.791 Nvme0n1 00:35:01.791 13:19:05 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:35:01.791 13:19:05 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:35:02.049 request: 00:35:02.049 { 00:35:02.049 "filename": "non_existing_file", 00:35:02.049 "bdev_name": "Nvme0n1", 00:35:02.049 "method": "bdev_nvme_apply_firmware", 00:35:02.049 "req_id": 1 00:35:02.049 } 00:35:02.049 Got JSON-RPC error response 00:35:02.049 response: 00:35:02.049 { 00:35:02.049 "code": -32603, 00:35:02.049 "message": "open file failed." 00:35:02.049 } 00:35:02.049 13:19:06 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:35:02.049 13:19:06 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:35:02.049 13:19:06 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:35:02.308 13:19:06 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:35:02.308 13:19:06 -- nvme/nvme_rpc.sh@40 -- # killprocess 150882 00:35:02.308 13:19:06 -- common/autotest_common.sh@924 -- # '[' -z 150882 ']' 00:35:02.308 13:19:06 -- common/autotest_common.sh@928 -- # kill -0 150882 00:35:02.308 13:19:06 -- common/autotest_common.sh@929 -- # uname 00:35:02.308 13:19:06 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:35:02.308 13:19:06 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 150882 00:35:02.308 killing process with pid 150882 00:35:02.308 13:19:06 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:35:02.308 13:19:06 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:35:02.308 13:19:06 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 150882' 00:35:02.308 13:19:06 -- common/autotest_common.sh@943 -- # kill 150882 00:35:02.308 13:19:06 -- common/autotest_common.sh@948 -- # wait 150882 00:35:04.842 00:35:04.842 real 0m4.361s 00:35:04.842 user 0m8.461s 00:35:04.842 sys 0m0.548s 00:35:04.842 13:19:08 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:35:04.842 13:19:08 -- common/autotest_common.sh@10 -- # set +x 00:35:04.842 ************************************ 00:35:04.842 END TEST nvme_rpc 00:35:04.842 ************************************ 00:35:04.842 13:19:08 -- spdk/autotest.sh@236 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:35:04.842 13:19:08 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:35:04.842 13:19:08 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:35:04.842 13:19:08 -- common/autotest_common.sh@10 -- # set +x 00:35:04.842 ************************************ 00:35:04.842 
START TEST nvme_rpc_timeouts 00:35:04.842 ************************************ 00:35:04.842 13:19:08 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:35:04.842 * Looking for test storage... 00:35:04.842 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:35:04.842 13:19:08 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:04.842 13:19:08 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_150974 00:35:04.842 13:19:08 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_150974 00:35:04.842 13:19:08 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=151001 00:35:04.842 13:19:08 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:35:04.842 13:19:08 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:35:04.842 13:19:08 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 151001 00:35:04.842 13:19:08 -- common/autotest_common.sh@817 -- # '[' -z 151001 ']' 00:35:04.842 13:19:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:04.842 13:19:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:35:04.842 13:19:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:04.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:04.842 13:19:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:35:04.842 13:19:08 -- common/autotest_common.sh@10 -- # set +x 00:35:04.842 [2024-04-17 13:19:08.749000] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:35:04.842 [2024-04-17 13:19:08.749221] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151001 ] 00:35:04.842 [2024-04-17 13:19:08.927051] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:05.102 [2024-04-17 13:19:09.188350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:05.102 [2024-04-17 13:19:09.188363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:06.038 13:19:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:35:06.038 13:19:10 -- common/autotest_common.sh@850 -- # return 0 00:35:06.038 Checking default timeout settings: 00:35:06.038 13:19:10 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:35:06.038 13:19:10 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:35:06.297 Making settings changes with rpc: 00:35:06.297 13:19:10 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:35:06.297 13:19:10 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:35:06.874 Check default vs. modified settings: 00:35:06.874 13:19:10 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:35:06.874 13:19:10 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:35:07.132 13:19:11 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:35:07.132 13:19:11 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:35:07.132 13:19:11 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_150974 00:35:07.132 13:19:11 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:35:07.132 13:19:11 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:35:07.132 13:19:11 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:35:07.132 13:19:11 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_150974 00:35:07.132 13:19:11 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:35:07.132 13:19:11 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:35:07.132 13:19:11 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:35:07.132 13:19:11 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:35:07.132 13:19:11 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:35:07.132 Setting action_on_timeout is changed as expected. 00:35:07.132 13:19:11 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:35:07.132 13:19:11 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_150974 00:35:07.132 13:19:11 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:35:07.132 13:19:11 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:35:07.132 13:19:11 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:35:07.132 13:19:11 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_150974 00:35:07.132 13:19:11 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:35:07.132 13:19:11 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:35:07.132 13:19:11 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:35:07.132 13:19:11 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:35:07.132 Setting timeout_us is changed as expected. 00:35:07.132 13:19:11 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:35:07.132 13:19:11 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:35:07.132 13:19:11 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_150974 00:35:07.132 13:19:11 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:35:07.132 13:19:11 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:35:07.132 13:19:11 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:35:07.132 13:19:11 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_150974 00:35:07.132 13:19:11 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:35:07.132 13:19:11 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:35:07.132 13:19:11 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:35:07.132 13:19:11 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:35:07.132 Setting timeout_admin_us is changed as expected. 00:35:07.132 13:19:11 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
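The three checks above are a plain-text diff driven by save_config: each setting is grepped out of the default and modified JSON dumps (awk takes the value field, sed strips punctuation), and the test passes only if every value moved off its default of none/0/0 to abort/12000000/24000000. A condensed sketch of the same flow against a running spdk_tgt:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc save_config > /tmp/settings_default          # defaults: none/0/0
    $rpc bdev_nvme_set_options --timeout-us=12000000 \
        --timeout-admin-us=24000000 --action-on-timeout=abort
    $rpc save_config > /tmp/settings_modified
    for s in action_on_timeout timeout_us timeout_admin_us; do
        grep -H "$s" /tmp/settings_default /tmp/settings_modified
    done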
00:35:07.132 13:19:11 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:35:07.132 13:19:11 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_150974 /tmp/settings_modified_150974 00:35:07.132 13:19:11 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 151001 00:35:07.132 13:19:11 -- common/autotest_common.sh@924 -- # '[' -z 151001 ']' 00:35:07.132 13:19:11 -- common/autotest_common.sh@928 -- # kill -0 151001 00:35:07.132 13:19:11 -- common/autotest_common.sh@929 -- # uname 00:35:07.132 13:19:11 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:35:07.132 13:19:11 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 151001 00:35:07.132 13:19:11 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:35:07.132 13:19:11 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:35:07.132 killing process with pid 151001 00:35:07.132 13:19:11 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 151001' 00:35:07.132 13:19:11 -- common/autotest_common.sh@943 -- # kill 151001 00:35:07.132 13:19:11 -- common/autotest_common.sh@948 -- # wait 151001 00:35:09.667 RPC TIMEOUT SETTING TEST PASSED. 00:35:09.667 13:19:13 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:35:09.667 00:35:09.667 real 0m4.825s 00:35:09.667 user 0m9.369s 00:35:09.667 sys 0m0.641s 00:35:09.667 13:19:13 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:35:09.667 13:19:13 -- common/autotest_common.sh@10 -- # set +x 00:35:09.667 ************************************ 00:35:09.667 END TEST nvme_rpc_timeouts 00:35:09.667 ************************************ 00:35:09.667 13:19:13 -- spdk/autotest.sh@240 -- # '[' 1 -eq 0 ']' 00:35:09.667 13:19:13 -- spdk/autotest.sh@244 -- # [[ 0 -eq 1 ]] 00:35:09.667 13:19:13 -- spdk/autotest.sh@253 -- # '[' 0 -eq 1 ']' 00:35:09.667 13:19:13 -- spdk/autotest.sh@257 -- # timing_exit lib 00:35:09.667 13:19:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:35:09.667 13:19:13 -- common/autotest_common.sh@10 -- # set +x 00:35:09.667 13:19:13 -- spdk/autotest.sh@259 -- # '[' 0 -eq 1 ']' 00:35:09.667 13:19:13 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:35:09.667 13:19:13 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:35:09.667 13:19:13 -- spdk/autotest.sh@305 -- # '[' 0 -eq 1 ']' 00:35:09.667 13:19:13 -- spdk/autotest.sh@309 -- # '[' 0 -eq 1 ']' 00:35:09.667 13:19:13 -- spdk/autotest.sh@313 -- # '[' 0 -eq 1 ']' 00:35:09.667 13:19:13 -- spdk/autotest.sh@318 -- # '[' 0 -eq 1 ']' 00:35:09.667 13:19:13 -- spdk/autotest.sh@327 -- # '[' 0 -eq 1 ']' 00:35:09.667 13:19:13 -- spdk/autotest.sh@332 -- # '[' 0 -eq 1 ']' 00:35:09.667 13:19:13 -- spdk/autotest.sh@336 -- # '[' 0 -eq 1 ']' 00:35:09.667 13:19:13 -- spdk/autotest.sh@340 -- # '[' 0 -eq 1 ']' 00:35:09.667 13:19:13 -- spdk/autotest.sh@344 -- # '[' 0 -eq 1 ']' 00:35:09.667 13:19:13 -- spdk/autotest.sh@349 -- # '[' 0 -eq 1 ']' 00:35:09.667 13:19:13 -- spdk/autotest.sh@353 -- # '[' 0 -eq 1 ']' 00:35:09.667 13:19:13 -- spdk/autotest.sh@360 -- # [[ 0 -eq 1 ]] 00:35:09.667 13:19:13 -- spdk/autotest.sh@364 -- # [[ 0 -eq 1 ]] 00:35:09.667 13:19:13 -- spdk/autotest.sh@368 -- # [[ 0 -eq 1 ]] 00:35:09.667 13:19:13 -- spdk/autotest.sh@372 -- # [[ 1 -eq 1 ]] 00:35:09.667 13:19:13 -- spdk/autotest.sh@373 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:35:09.667 13:19:13 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:35:09.667 13:19:13 -- common/autotest_common.sh@1081 -- # xtrace_disable 
00:35:09.667 13:19:13 -- common/autotest_common.sh@10 -- # set +x 00:35:09.667 ************************************ 00:35:09.667 START TEST blockdev_raid5f 00:35:09.667 ************************************ 00:35:09.667 13:19:13 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:35:09.667 * Looking for test storage... 00:35:09.667 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:35:09.667 13:19:13 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:35:09.667 13:19:13 -- bdev/nbd_common.sh@6 -- # set -e 00:35:09.667 13:19:13 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:35:09.667 13:19:13 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:35:09.667 13:19:13 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:35:09.667 13:19:13 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:35:09.667 13:19:13 -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:35:09.667 13:19:13 -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:35:09.667 13:19:13 -- bdev/blockdev.sh@20 -- # : 00:35:09.667 13:19:13 -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:35:09.667 13:19:13 -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:35:09.667 13:19:13 -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:35:09.667 13:19:13 -- bdev/blockdev.sh@674 -- # uname -s 00:35:09.667 13:19:13 -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:35:09.667 13:19:13 -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:35:09.667 13:19:13 -- bdev/blockdev.sh@682 -- # test_type=raid5f 00:35:09.667 13:19:13 -- bdev/blockdev.sh@683 -- # crypto_device= 00:35:09.667 13:19:13 -- bdev/blockdev.sh@684 -- # dek= 00:35:09.667 13:19:13 -- bdev/blockdev.sh@685 -- # env_ctx= 00:35:09.667 13:19:13 -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:35:09.667 13:19:13 -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:35:09.667 13:19:13 -- bdev/blockdev.sh@690 -- # [[ raid5f == bdev ]] 00:35:09.667 13:19:13 -- bdev/blockdev.sh@690 -- # [[ raid5f == crypto_* ]] 00:35:09.667 13:19:13 -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:35:09.667 13:19:13 -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=151186 00:35:09.667 13:19:13 -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:35:09.667 13:19:13 -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:35:09.667 13:19:13 -- bdev/blockdev.sh@49 -- # waitforlisten 151186 00:35:09.667 13:19:13 -- common/autotest_common.sh@817 -- # '[' -z 151186 ']' 00:35:09.667 13:19:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:09.667 13:19:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:35:09.667 13:19:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:09.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:09.667 13:19:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:35:09.667 13:19:13 -- common/autotest_common.sh@10 -- # set +x 00:35:09.667 [2024-04-17 13:19:13.659881] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
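The raid5f volume this suite tests is assembled from three malloc bdevs; the bdev_get_bdevs dump further down shows Malloc0-2 as base bdevs with strip_size_kb 2 and 65536 data blocks each. A rough sketch of that setup over rpc.py, assuming the bdev_raid_create RPC of this revision, with the base bdev size inferred from the dump (65536 x 512 B = 32 MiB):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for i in 0 1 2; do
        $rpc bdev_malloc_create -b Malloc$i 32 512   # 32 MiB, 512 B blocks
    done
    $rpc bdev_raid_create -n raid5f -z 2 -r raid5f \
        -b 'Malloc0 Malloc1 Malloc2'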
00:35:09.667 [2024-04-17 13:19:13.660063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151186 ] 00:35:09.926 [2024-04-17 13:19:13.832480] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:09.926 [2024-04-17 13:19:14.059231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:10.860 13:19:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:35:10.860 13:19:14 -- common/autotest_common.sh@850 -- # return 0 00:35:10.860 13:19:14 -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:35:10.860 13:19:14 -- bdev/blockdev.sh@726 -- # setup_raid5f_conf 00:35:10.860 13:19:14 -- bdev/blockdev.sh@280 -- # rpc_cmd 00:35:10.860 13:19:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:10.860 13:19:14 -- common/autotest_common.sh@10 -- # set +x 00:35:10.860 Malloc0 00:35:10.860 Malloc1 00:35:10.860 Malloc2 00:35:10.860 13:19:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:10.860 13:19:14 -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:35:10.860 13:19:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:10.860 13:19:14 -- common/autotest_common.sh@10 -- # set +x 00:35:10.860 13:19:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:10.860 13:19:14 -- bdev/blockdev.sh@740 -- # cat 00:35:10.860 13:19:14 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:35:10.860 13:19:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:10.860 13:19:14 -- common/autotest_common.sh@10 -- # set +x 00:35:11.119 13:19:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:11.119 13:19:15 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:35:11.119 13:19:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:11.119 13:19:15 -- common/autotest_common.sh@10 -- # set +x 00:35:11.119 13:19:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:11.119 13:19:15 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:35:11.119 13:19:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:11.119 13:19:15 -- common/autotest_common.sh@10 -- # set +x 00:35:11.119 13:19:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:11.119 13:19:15 -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:35:11.119 13:19:15 -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:35:11.119 13:19:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:11.119 13:19:15 -- common/autotest_common.sh@10 -- # set +x 00:35:11.119 13:19:15 -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:35:11.119 13:19:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:11.119 13:19:15 -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:35:11.119 13:19:15 -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "5e53b383-af51-4ffd-b207-2db1e008cbd1"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "5e53b383-af51-4ffd-b207-2db1e008cbd1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": 
false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "5e53b383-af51-4ffd-b207-2db1e008cbd1",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "de43396f-5427-4640-833b-61d46a71840a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "ce313051-7f73-45ae-81f9-e99c92b5c21b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "63635f5e-9b24-4801-b9af-5749f56bcab4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:35:11.119 13:19:15 -- bdev/blockdev.sh@749 -- # jq -r .name 00:35:11.119 13:19:15 -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:35:11.119 13:19:15 -- bdev/blockdev.sh@752 -- # hello_world_bdev=raid5f 00:35:11.119 13:19:15 -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:35:11.119 13:19:15 -- bdev/blockdev.sh@754 -- # killprocess 151186 00:35:11.119 13:19:15 -- common/autotest_common.sh@924 -- # '[' -z 151186 ']' 00:35:11.119 13:19:15 -- common/autotest_common.sh@928 -- # kill -0 151186 00:35:11.119 13:19:15 -- common/autotest_common.sh@929 -- # uname 00:35:11.119 13:19:15 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:35:11.119 13:19:15 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 151186 00:35:11.119 killing process with pid 151186 00:35:11.119 13:19:15 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:35:11.119 13:19:15 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:35:11.119 13:19:15 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 151186' 00:35:11.119 13:19:15 -- common/autotest_common.sh@943 -- # kill 151186 00:35:11.119 13:19:15 -- common/autotest_common.sh@948 -- # wait 151186 00:35:13.690 13:19:17 -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:35:13.690 13:19:17 -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:35:13.690 13:19:17 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:35:13.690 13:19:17 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:35:13.690 13:19:17 -- common/autotest_common.sh@10 -- # set +x 00:35:13.690 ************************************ 00:35:13.690 START TEST bdev_hello_world 00:35:13.690 ************************************ 00:35:13.690 13:19:17 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:35:13.690 [2024-04-17 13:19:17.792208] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 
00:35:13.690 [2024-04-17 13:19:17.792461] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151262 ] 00:35:13.949 [2024-04-17 13:19:17.978012] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:14.208 [2024-04-17 13:19:18.192561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:14.776 [2024-04-17 13:19:18.697729] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:35:14.776 [2024-04-17 13:19:18.697821] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:35:14.776 [2024-04-17 13:19:18.697881] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:35:14.776 [2024-04-17 13:19:18.698504] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:35:14.776 [2024-04-17 13:19:18.698663] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:35:14.776 [2024-04-17 13:19:18.698695] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:35:14.776 [2024-04-17 13:19:18.698786] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:35:14.776 00:35:14.776 [2024-04-17 13:19:18.698820] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:35:16.202 ************************************ 00:35:16.202 END TEST bdev_hello_world 00:35:16.202 ************************************ 00:35:16.202 00:35:16.202 real 0m2.334s 00:35:16.202 user 0m1.951s 00:35:16.202 sys 0m0.273s 00:35:16.202 13:19:20 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:35:16.202 13:19:20 -- common/autotest_common.sh@10 -- # set +x 00:35:16.202 13:19:20 -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:35:16.202 13:19:20 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:35:16.202 13:19:20 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:35:16.202 13:19:20 -- common/autotest_common.sh@10 -- # set +x 00:35:16.202 ************************************ 00:35:16.202 START TEST bdev_bounds 00:35:16.202 ************************************ 00:35:16.202 13:19:20 -- common/autotest_common.sh@1099 -- # bdev_bounds '' 00:35:16.202 13:19:20 -- bdev/blockdev.sh@290 -- # bdevio_pid=151328 00:35:16.202 13:19:20 -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:35:16.202 Process bdevio pid: 151328 00:35:16.202 13:19:20 -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 151328' 00:35:16.202 13:19:20 -- bdev/blockdev.sh@293 -- # waitforlisten 151328 00:35:16.202 13:19:20 -- common/autotest_common.sh@817 -- # '[' -z 151328 ']' 00:35:16.202 13:19:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:16.202 13:19:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:35:16.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:16.202 13:19:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:16.202 13:19:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:35:16.202 13:19:20 -- common/autotest_common.sh@10 -- # set +x 00:35:16.202 13:19:20 -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:35:16.202 [2024-04-17 13:19:20.183068] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:35:16.202 [2024-04-17 13:19:20.183566] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151328 ] 00:35:16.460 [2024-04-17 13:19:20.357922] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:16.719 [2024-04-17 13:19:20.614069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:16.719 [2024-04-17 13:19:20.614209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:16.719 [2024-04-17 13:19:20.614209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:17.287 13:19:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:35:17.287 13:19:21 -- common/autotest_common.sh@850 -- # return 0 00:35:17.287 13:19:21 -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:35:17.287 I/O targets: 00:35:17.287 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:35:17.287 00:35:17.287 00:35:17.287 CUnit - A unit testing framework for C - Version 2.1-3 00:35:17.287 http://cunit.sourceforge.net/ 00:35:17.287 00:35:17.287 00:35:17.287 Suite: bdevio tests on: raid5f 00:35:17.287 Test: blockdev write read block ...passed 00:35:17.287 Test: blockdev write zeroes read block ...passed 00:35:17.287 Test: blockdev write zeroes read no split ...passed 00:35:17.287 Test: blockdev write zeroes read split ...passed 00:35:17.546 Test: blockdev write zeroes read split partial ...passed 00:35:17.546 Test: blockdev reset ...passed 00:35:17.546 Test: blockdev write read 8 blocks ...passed 00:35:17.546 Test: blockdev write read size > 128k ...passed 00:35:17.546 Test: blockdev write read invalid size ...passed 00:35:17.546 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:17.546 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:17.546 Test: blockdev write read max offset ...passed 00:35:17.546 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:17.546 Test: blockdev writev readv 8 blocks ...passed 00:35:17.546 Test: blockdev writev readv 30 x 1block ...passed 00:35:17.546 Test: blockdev writev readv block ...passed 00:35:17.546 Test: blockdev writev readv size > 128k ...passed 00:35:17.546 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:17.546 Test: blockdev comparev and writev ...passed 00:35:17.546 Test: blockdev nvme passthru rw ...passed 00:35:17.546 Test: blockdev nvme passthru vendor specific ...passed 00:35:17.546 Test: blockdev nvme admin passthru ...passed 00:35:17.546 Test: blockdev copy ...passed 00:35:17.546 00:35:17.546 Run Summary: Type Total Ran Passed Failed Inactive 00:35:17.546 suites 1 1 n/a 0 0 00:35:17.546 tests 23 23 23 0 0 00:35:17.546 asserts 130 130 130 0 n/a 00:35:17.546 00:35:17.546 Elapsed time = 0.523 seconds 00:35:17.546 0 00:35:17.546 13:19:21 -- bdev/blockdev.sh@295 -- # killprocess 151328 00:35:17.546 13:19:21 -- common/autotest_common.sh@924 -- # '[' -z 151328 ']' 
00:35:17.546 13:19:21 -- common/autotest_common.sh@928 -- # kill -0 151328 00:35:17.546 13:19:21 -- common/autotest_common.sh@929 -- # uname 00:35:17.546 13:19:21 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:35:17.546 13:19:21 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 151328 00:35:17.546 13:19:21 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:35:17.546 killing process with pid 151328 00:35:17.546 13:19:21 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:35:17.546 13:19:21 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 151328' 00:35:17.546 13:19:21 -- common/autotest_common.sh@943 -- # kill 151328 00:35:17.546 13:19:21 -- common/autotest_common.sh@948 -- # wait 151328 00:35:18.921 13:19:22 -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:35:18.921 00:35:18.921 real 0m2.783s 00:35:18.921 user 0m6.520s 00:35:18.921 sys 0m0.393s 00:35:18.921 ************************************ 00:35:18.921 END TEST bdev_bounds 00:35:18.921 ************************************ 00:35:18.921 13:19:22 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:35:18.921 13:19:22 -- common/autotest_common.sh@10 -- # set +x 00:35:18.921 13:19:22 -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:35:18.921 13:19:22 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:35:18.921 13:19:22 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:35:18.921 13:19:22 -- common/autotest_common.sh@10 -- # set +x 00:35:18.921 ************************************ 00:35:18.921 START TEST bdev_nbd 00:35:18.921 ************************************ 00:35:18.921 13:19:22 -- common/autotest_common.sh@1099 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:35:18.921 13:19:22 -- bdev/blockdev.sh@300 -- # uname -s 00:35:18.921 13:19:22 -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:35:18.921 13:19:22 -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:18.921 13:19:22 -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:35:18.921 13:19:22 -- bdev/blockdev.sh@304 -- # bdev_all=($2) 00:35:18.921 13:19:22 -- bdev/blockdev.sh@304 -- # local bdev_all 00:35:18.921 13:19:22 -- bdev/blockdev.sh@305 -- # local bdev_num=1 00:35:18.921 13:19:22 -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:35:18.921 13:19:22 -- bdev/blockdev.sh@311 -- # nbd_all=(/dev/nbd+([0-9])) 00:35:18.921 13:19:22 -- bdev/blockdev.sh@311 -- # local nbd_all 00:35:18.921 13:19:22 -- bdev/blockdev.sh@312 -- # bdev_num=1 00:35:18.921 13:19:22 -- bdev/blockdev.sh@314 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:35:18.921 13:19:22 -- bdev/blockdev.sh@314 -- # local nbd_list 00:35:18.921 13:19:22 -- bdev/blockdev.sh@315 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:35:18.921 13:19:22 -- bdev/blockdev.sh@315 -- # local bdev_list 00:35:18.921 13:19:22 -- bdev/blockdev.sh@318 -- # nbd_pid=151401 00:35:18.921 13:19:22 -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:35:18.921 13:19:22 -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:35:18.921 13:19:22 -- bdev/blockdev.sh@320 -- # waitforlisten 151401 /var/tmp/spdk-nbd.sock 00:35:18.921 13:19:22 -- common/autotest_common.sh@817 -- # '[' -z 151401 ']' 00:35:18.921 13:19:22 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:35:18.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:35:18.921 13:19:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:35:18.921 13:19:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:35:18.921 13:19:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:35:18.921 13:19:22 -- common/autotest_common.sh@10 -- # set +x 00:35:18.921 [2024-04-17 13:19:23.056729] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:35:18.921 [2024-04-17 13:19:23.056898] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:19.180 [2024-04-17 13:19:23.215917] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:19.439 [2024-04-17 13:19:23.469607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:20.021 13:19:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:35:20.021 13:19:24 -- common/autotest_common.sh@850 -- # return 0 00:35:20.021 13:19:24 -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:35:20.021 13:19:24 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:20.022 13:19:24 -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:35:20.022 13:19:24 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:35:20.022 13:19:24 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:35:20.022 13:19:24 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:20.022 13:19:24 -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:35:20.022 13:19:24 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:35:20.022 13:19:24 -- bdev/nbd_common.sh@24 -- # local i 00:35:20.022 13:19:24 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:35:20.022 13:19:24 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:35:20.022 13:19:24 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:35:20.022 13:19:24 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:35:20.328 13:19:24 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:35:20.328 13:19:24 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:35:20.328 13:19:24 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:35:20.328 13:19:24 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:35:20.328 13:19:24 -- common/autotest_common.sh@855 -- # local i 00:35:20.328 13:19:24 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:35:20.328 13:19:24 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:35:20.328 13:19:24 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:35:20.328 13:19:24 -- common/autotest_common.sh@859 -- # break 00:35:20.328 13:19:24 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:35:20.328 13:19:24 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:35:20.328 13:19:24 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:20.328 1+0 records in 00:35:20.328 1+0 records out 00:35:20.328 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360365 s, 11.4 MB/s 00:35:20.328 13:19:24 -- common/autotest_common.sh@872 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:20.328 13:19:24 -- common/autotest_common.sh@872 -- # size=4096 00:35:20.328 13:19:24 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:20.328 13:19:24 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:35:20.328 13:19:24 -- common/autotest_common.sh@875 -- # return 0 00:35:20.328 13:19:24 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:35:20.328 13:19:24 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:35:20.328 13:19:24 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:35:20.587 13:19:24 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:35:20.587 { 00:35:20.587 "nbd_device": "/dev/nbd0", 00:35:20.587 "bdev_name": "raid5f" 00:35:20.587 } 00:35:20.587 ]' 00:35:20.587 13:19:24 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:35:20.587 13:19:24 -- bdev/nbd_common.sh@119 -- # echo '[ 00:35:20.587 { 00:35:20.587 "nbd_device": "/dev/nbd0", 00:35:20.587 "bdev_name": "raid5f" 00:35:20.587 } 00:35:20.587 ]' 00:35:20.587 13:19:24 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:35:20.587 13:19:24 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:35:20.587 13:19:24 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:20.587 13:19:24 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:35:20.587 13:19:24 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:20.587 13:19:24 -- bdev/nbd_common.sh@51 -- # local i 00:35:20.587 13:19:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:20.587 13:19:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:35:20.846 13:19:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:20.846 13:19:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:20.846 13:19:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:20.846 13:19:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:20.846 13:19:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:20.846 13:19:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:20.846 13:19:24 -- bdev/nbd_common.sh@41 -- # break 00:35:20.846 13:19:24 -- bdev/nbd_common.sh@45 -- # return 0 00:35:20.846 13:19:24 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:35:20.846 13:19:24 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:20.846 13:19:24 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:35:21.104 13:19:25 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:35:21.104 13:19:25 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:35:21.104 13:19:25 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:35:21.104 13:19:25 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:35:21.104 13:19:25 -- bdev/nbd_common.sh@65 -- # echo '' 00:35:21.104 13:19:25 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:35:21.104 13:19:25 -- bdev/nbd_common.sh@65 -- # true 00:35:21.105 13:19:25 -- bdev/nbd_common.sh@65 -- # count=0 00:35:21.105 13:19:25 -- bdev/nbd_common.sh@66 -- # echo 0 00:35:21.105 13:19:25 -- bdev/nbd_common.sh@122 -- # count=0 00:35:21.105 13:19:25 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:35:21.105 13:19:25 -- bdev/nbd_common.sh@127 -- # return 0 00:35:21.105 13:19:25 -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify 
/var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:35:21.105 13:19:25 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:21.105 13:19:25 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:35:21.105 13:19:25 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:35:21.105 13:19:25 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:35:21.105 13:19:25 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:35:21.105 13:19:25 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:35:21.105 13:19:25 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:21.105 13:19:25 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:35:21.105 13:19:25 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:35:21.105 13:19:25 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:35:21.105 13:19:25 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:21.105 13:19:25 -- bdev/nbd_common.sh@12 -- # local i 00:35:21.105 13:19:25 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:21.105 13:19:25 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:21.105 13:19:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:35:21.364 /dev/nbd0 00:35:21.364 13:19:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:35:21.364 13:19:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:35:21.364 13:19:25 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:35:21.364 13:19:25 -- common/autotest_common.sh@855 -- # local i 00:35:21.364 13:19:25 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:35:21.364 13:19:25 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:35:21.364 13:19:25 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:35:21.364 13:19:25 -- common/autotest_common.sh@859 -- # break 00:35:21.364 13:19:25 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:35:21.364 13:19:25 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:35:21.364 13:19:25 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:21.364 1+0 records in 00:35:21.364 1+0 records out 00:35:21.364 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312347 s, 13.1 MB/s 00:35:21.364 13:19:25 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:21.364 13:19:25 -- common/autotest_common.sh@872 -- # size=4096 00:35:21.364 13:19:25 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:21.364 13:19:25 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:35:21.364 13:19:25 -- common/autotest_common.sh@875 -- # return 0 00:35:21.364 13:19:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:21.364 13:19:25 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:21.364 13:19:25 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:35:21.364 13:19:25 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:21.364 13:19:25 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:35:21.622 13:19:25 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:35:21.622 { 00:35:21.622 "nbd_device": "/dev/nbd0", 00:35:21.622 "bdev_name": "raid5f" 00:35:21.622 } 00:35:21.622 ]' 00:35:21.622 13:19:25 -- bdev/nbd_common.sh@64 -- # echo '[ 00:35:21.622 { 00:35:21.622 "nbd_device": "/dev/nbd0", 00:35:21.623 "bdev_name": "raid5f" 00:35:21.623 } 00:35:21.623 ]' 00:35:21.623 13:19:25 -- 
bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:35:21.623 13:19:25 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:35:21.623 13:19:25 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:35:21.623 13:19:25 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:35:21.623 13:19:25 -- bdev/nbd_common.sh@65 -- # count=1 00:35:21.623 13:19:25 -- bdev/nbd_common.sh@66 -- # echo 1 00:35:21.623 13:19:25 -- bdev/nbd_common.sh@95 -- # count=1 00:35:21.623 13:19:25 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:35:21.623 13:19:25 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:35:21.623 13:19:25 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:35:21.623 13:19:25 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:35:21.623 13:19:25 -- bdev/nbd_common.sh@71 -- # local operation=write 00:35:21.623 13:19:25 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:35:21.623 13:19:25 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:35:21.623 13:19:25 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:35:21.881 256+0 records in 00:35:21.881 256+0 records out 00:35:21.881 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00614901 s, 171 MB/s 00:35:21.881 13:19:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:35:21.881 13:19:25 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:35:21.881 256+0 records in 00:35:21.881 256+0 records out 00:35:21.881 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0338798 s, 30.9 MB/s 00:35:21.881 13:19:25 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:35:21.881 13:19:25 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:35:21.881 13:19:25 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:35:21.881 13:19:25 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:35:21.881 13:19:25 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:35:21.881 13:19:25 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:35:21.881 13:19:25 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:35:21.881 13:19:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:35:21.881 13:19:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:35:21.881 13:19:25 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:35:21.881 13:19:25 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:35:21.881 13:19:25 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:21.881 13:19:25 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:35:21.881 13:19:25 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:21.881 13:19:25 -- bdev/nbd_common.sh@51 -- # local i 00:35:21.881 13:19:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:21.881 13:19:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:35:22.140 13:19:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:22.140 13:19:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:22.140 13:19:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:22.140 13:19:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:22.140 13:19:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:22.140 13:19:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:35:22.140 13:19:26 -- bdev/nbd_common.sh@41 -- # break 00:35:22.140 13:19:26 -- bdev/nbd_common.sh@45 -- # return 0 00:35:22.140 13:19:26 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:35:22.140 13:19:26 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:22.141 13:19:26 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:35:22.399 13:19:26 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:35:22.399 13:19:26 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:35:22.399 13:19:26 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:35:22.399 13:19:26 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:35:22.399 13:19:26 -- bdev/nbd_common.sh@65 -- # echo '' 00:35:22.399 13:19:26 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:35:22.399 13:19:26 -- bdev/nbd_common.sh@65 -- # true 00:35:22.399 13:19:26 -- bdev/nbd_common.sh@65 -- # count=0 00:35:22.399 13:19:26 -- bdev/nbd_common.sh@66 -- # echo 0 00:35:22.399 13:19:26 -- bdev/nbd_common.sh@104 -- # count=0 00:35:22.399 13:19:26 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:35:22.399 13:19:26 -- bdev/nbd_common.sh@109 -- # return 0 00:35:22.399 13:19:26 -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:35:22.399 13:19:26 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:22.399 13:19:26 -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:35:22.399 13:19:26 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:35:22.399 13:19:26 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:35:22.399 13:19:26 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:35:22.658 malloc_lvol_verify 00:35:22.658 13:19:26 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:35:22.917 59a33c69-dd30-4d8b-9480-f8157ae53d29 00:35:22.917 13:19:27 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:35:23.176 4f5d731e-ea8b-4fab-a3e5-2da71dc86f6e 00:35:23.176 13:19:27 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:35:23.435 /dev/nbd0 00:35:23.435 13:19:27 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:35:23.435 mke2fs 1.45.5 (07-Jan-2020) 00:35:23.435 00:35:23.435 Filesystem too small for a journal 00:35:23.435 Creating filesystem with 1024 4k blocks and 1024 inodes 00:35:23.435 00:35:23.435 Allocating group tables: 0/1 done 00:35:23.435 Writing inode tables: 0/1 done 00:35:23.436 Writing superblocks and filesystem accounting information: 0/1 done 00:35:23.436 00:35:23.436 13:19:27 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:35:23.436 13:19:27 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:35:23.436 13:19:27 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:23.436 13:19:27 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:35:23.436 13:19:27 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:23.436 13:19:27 -- bdev/nbd_common.sh@51 -- # local i 00:35:23.436 13:19:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:23.436 13:19:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd0 00:35:24.003 13:19:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:24.003 13:19:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:24.003 13:19:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:24.003 13:19:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:24.003 13:19:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:24.003 13:19:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:24.003 13:19:27 -- bdev/nbd_common.sh@41 -- # break 00:35:24.003 13:19:27 -- bdev/nbd_common.sh@45 -- # return 0 00:35:24.003 13:19:27 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:35:24.003 13:19:27 -- bdev/nbd_common.sh@147 -- # return 0 00:35:24.003 13:19:27 -- bdev/blockdev.sh@326 -- # killprocess 151401 00:35:24.003 13:19:27 -- common/autotest_common.sh@924 -- # '[' -z 151401 ']' 00:35:24.003 13:19:27 -- common/autotest_common.sh@928 -- # kill -0 151401 00:35:24.003 13:19:27 -- common/autotest_common.sh@929 -- # uname 00:35:24.003 13:19:27 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:35:24.003 13:19:27 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 151401 00:35:24.003 13:19:27 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:35:24.003 killing process with pid 151401 00:35:24.003 13:19:27 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:35:24.003 13:19:27 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 151401' 00:35:24.003 13:19:27 -- common/autotest_common.sh@943 -- # kill 151401 00:35:24.003 13:19:27 -- common/autotest_common.sh@948 -- # wait 151401 00:35:25.380 ************************************ 00:35:25.380 END TEST bdev_nbd 00:35:25.380 ************************************ 00:35:25.380 13:19:29 -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:35:25.380 00:35:25.380 real 0m6.377s 00:35:25.380 user 0m9.228s 00:35:25.380 sys 0m1.209s 00:35:25.380 13:19:29 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:35:25.380 13:19:29 -- common/autotest_common.sh@10 -- # set +x 00:35:25.380 13:19:29 -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:35:25.380 13:19:29 -- bdev/blockdev.sh@764 -- # '[' raid5f = nvme ']' 00:35:25.380 13:19:29 -- bdev/blockdev.sh@764 -- # '[' raid5f = gpt ']' 00:35:25.380 13:19:29 -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:35:25.380 13:19:29 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:35:25.380 13:19:29 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:35:25.380 13:19:29 -- common/autotest_common.sh@10 -- # set +x 00:35:25.380 ************************************ 00:35:25.380 START TEST bdev_fio 00:35:25.380 ************************************ 00:35:25.380 13:19:29 -- common/autotest_common.sh@1099 -- # fio_test_suite '' 00:35:25.380 13:19:29 -- bdev/blockdev.sh@331 -- # local env_context 00:35:25.380 13:19:29 -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:35:25.380 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:35:25.380 13:19:29 -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:35:25.380 13:19:29 -- bdev/blockdev.sh@339 -- # echo '' 00:35:25.380 13:19:29 -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:35:25.380 13:19:29 -- bdev/blockdev.sh@339 -- # env_context= 00:35:25.380 13:19:29 -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:35:25.380 13:19:29 -- common/autotest_common.sh@1254 -- # local 
config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:35:25.380 13:19:29 -- common/autotest_common.sh@1255 -- # local workload=verify 00:35:25.380 13:19:29 -- common/autotest_common.sh@1256 -- # local bdev_type=AIO 00:35:25.380 13:19:29 -- common/autotest_common.sh@1257 -- # local env_context= 00:35:25.380 13:19:29 -- common/autotest_common.sh@1258 -- # local fio_dir=/usr/src/fio 00:35:25.380 13:19:29 -- common/autotest_common.sh@1260 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:35:25.380 13:19:29 -- common/autotest_common.sh@1265 -- # '[' -z verify ']' 00:35:25.380 13:19:29 -- common/autotest_common.sh@1269 -- # '[' -n '' ']' 00:35:25.380 13:19:29 -- common/autotest_common.sh@1273 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:35:25.380 13:19:29 -- common/autotest_common.sh@1275 -- # cat 00:35:25.380 13:19:29 -- common/autotest_common.sh@1287 -- # '[' verify == verify ']' 00:35:25.380 13:19:29 -- common/autotest_common.sh@1288 -- # cat 00:35:25.380 13:19:29 -- common/autotest_common.sh@1297 -- # '[' AIO == AIO ']' 00:35:25.380 13:19:29 -- common/autotest_common.sh@1298 -- # /usr/src/fio/fio --version 00:35:25.380 13:19:29 -- common/autotest_common.sh@1298 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:35:25.380 13:19:29 -- common/autotest_common.sh@1299 -- # echo serialize_overlap=1 00:35:25.380 13:19:29 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:35:25.380 13:19:29 -- bdev/blockdev.sh@342 -- # echo '[job_raid5f]' 00:35:25.380 13:19:29 -- bdev/blockdev.sh@343 -- # echo filename=raid5f 00:35:25.380 13:19:29 -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:35:25.381 13:19:29 -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:35:25.381 13:19:29 -- common/autotest_common.sh@1075 -- # '[' 11 -le 1 ']' 00:35:25.381 13:19:29 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:35:25.381 13:19:29 -- common/autotest_common.sh@10 -- # set +x 00:35:25.639 ************************************ 00:35:25.639 START TEST bdev_fio_rw_verify 00:35:25.639 ************************************ 00:35:25.639 13:19:29 -- common/autotest_common.sh@1099 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:35:25.639 13:19:29 -- common/autotest_common.sh@1330 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:35:25.639 13:19:29 -- common/autotest_common.sh@1311 -- # local fio_dir=/usr/src/fio 00:35:25.639 13:19:29 -- common/autotest_common.sh@1313 -- # sanitizers=(libasan libclang_rt.asan) 00:35:25.639 13:19:29 -- common/autotest_common.sh@1313 -- # local sanitizers 00:35:25.639 13:19:29 -- common/autotest_common.sh@1314 -- # 
local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:25.640 13:19:29 -- common/autotest_common.sh@1315 -- # shift 00:35:25.640 13:19:29 -- common/autotest_common.sh@1317 -- # local asan_lib= 00:35:25.640 13:19:29 -- common/autotest_common.sh@1318 -- # for sanitizer in "${sanitizers[@]}" 00:35:25.640 13:19:29 -- common/autotest_common.sh@1319 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:25.640 13:19:29 -- common/autotest_common.sh@1319 -- # grep libasan 00:35:25.640 13:19:29 -- common/autotest_common.sh@1319 -- # awk '{print $3}' 00:35:25.640 13:19:29 -- common/autotest_common.sh@1319 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:35:25.640 13:19:29 -- common/autotest_common.sh@1320 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:35:25.640 13:19:29 -- common/autotest_common.sh@1321 -- # break 00:35:25.640 13:19:29 -- common/autotest_common.sh@1326 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:35:25.640 13:19:29 -- common/autotest_common.sh@1326 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:35:25.640 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:35:25.640 fio-3.35 00:35:25.640 Starting 1 thread 00:35:37.842 00:35:37.842 job_raid5f: (groupid=0, jobs=1): err= 0: pid=151673: Wed Apr 17 13:19:40 2024 00:35:37.842 read: IOPS=8959, BW=35.0MiB/s (36.7MB/s)(350MiB/10001msec) 00:35:37.842 slat (usec): min=20, max=2338, avg=26.84, stdev= 9.50 00:35:37.842 clat (usec): min=12, max=2586, avg=175.35, stdev=71.37 00:35:37.842 lat (usec): min=37, max=2621, avg=202.19, stdev=73.40 00:35:37.842 clat percentiles (usec): 00:35:37.842 | 50.000th=[ 176], 99.000th=[ 330], 99.900th=[ 586], 99.990th=[ 881], 00:35:37.842 | 99.999th=[ 2573] 00:35:37.842 write: IOPS=9415, BW=36.8MiB/s (38.6MB/s)(363MiB/9882msec); 0 zone resets 00:35:37.842 slat (usec): min=10, max=740, avg=23.46, stdev= 6.69 00:35:37.842 clat (usec): min=71, max=1549, avg=402.80, stdev=74.84 00:35:37.842 lat (usec): min=91, max=1617, avg=426.25, stdev=78.02 00:35:37.842 clat percentiles (usec): 00:35:37.842 | 50.000th=[ 400], 99.000th=[ 652], 99.900th=[ 971], 99.990th=[ 1385], 00:35:37.842 | 99.999th=[ 1549] 00:35:37.842 bw ( KiB/s): min=27192, max=40720, per=98.56%, avg=37118.74, stdev=3354.82, samples=19 00:35:37.842 iops : min= 6798, max=10180, avg=9279.68, stdev=838.70, samples=19 00:35:37.842 lat (usec) : 20=0.01%, 50=0.01%, 100=7.21%, 250=34.40%, 500=54.51% 00:35:37.842 lat (usec) : 750=3.58%, 1000=0.26% 00:35:37.842 lat (msec) : 2=0.03%, 4=0.01% 00:35:37.842 cpu : usr=99.63%, sys=0.29%, ctx=38, majf=0, minf=6390 00:35:37.842 IO depths : 1=7.6%, 2=19.7%, 4=55.4%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:37.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.842 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.842 issued rwts: total=89607,93044,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.842 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:37.842 00:35:37.842 Run status group 0 (all jobs): 00:35:37.842 READ: bw=35.0MiB/s (36.7MB/s), 35.0MiB/s-35.0MiB/s (36.7MB/s-36.7MB/s), io=350MiB (367MB), run=10001-10001msec 00:35:37.842 WRITE: bw=36.8MiB/s (38.6MB/s), 
36.8MiB/s-36.8MiB/s (38.6MB/s-38.6MB/s), io=363MiB (381MB), run=9882-9882msec 00:35:38.410 ----------------------------------------------------- 00:35:38.410 Suppressions used: 00:35:38.410 count bytes template 00:35:38.410 1 7 /usr/src/fio/parse.c 00:35:38.410 651 62496 /usr/src/fio/iolog.c 00:35:38.410 2 596 libcrypto.so 00:35:38.410 ----------------------------------------------------- 00:35:38.410 00:35:38.410 ************************************ 00:35:38.410 END TEST bdev_fio_rw_verify 00:35:38.410 ************************************ 00:35:38.410 00:35:38.410 real 0m12.783s 00:35:38.410 user 0m13.380s 00:35:38.410 sys 0m0.706s 00:35:38.410 13:19:42 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:35:38.410 13:19:42 -- common/autotest_common.sh@10 -- # set +x 00:35:38.410 13:19:42 -- bdev/blockdev.sh@350 -- # rm -f 00:35:38.410 13:19:42 -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:35:38.410 13:19:42 -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:35:38.410 13:19:42 -- common/autotest_common.sh@1254 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:35:38.410 13:19:42 -- common/autotest_common.sh@1255 -- # local workload=trim 00:35:38.410 13:19:42 -- common/autotest_common.sh@1256 -- # local bdev_type= 00:35:38.410 13:19:42 -- common/autotest_common.sh@1257 -- # local env_context= 00:35:38.410 13:19:42 -- common/autotest_common.sh@1258 -- # local fio_dir=/usr/src/fio 00:35:38.410 13:19:42 -- common/autotest_common.sh@1260 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:35:38.410 13:19:42 -- common/autotest_common.sh@1265 -- # '[' -z trim ']' 00:35:38.410 13:19:42 -- common/autotest_common.sh@1269 -- # '[' -n '' ']' 00:35:38.410 13:19:42 -- common/autotest_common.sh@1273 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:35:38.410 13:19:42 -- common/autotest_common.sh@1275 -- # cat 00:35:38.410 13:19:42 -- common/autotest_common.sh@1287 -- # '[' trim == verify ']' 00:35:38.410 13:19:42 -- common/autotest_common.sh@1302 -- # '[' trim == trim ']' 00:35:38.410 13:19:42 -- common/autotest_common.sh@1303 -- # echo rw=trimwrite 00:35:38.410 13:19:42 -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:35:38.410 13:19:42 -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "5e53b383-af51-4ffd-b207-2db1e008cbd1"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "5e53b383-af51-4ffd-b207-2db1e008cbd1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "5e53b383-af51-4ffd-b207-2db1e008cbd1",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "de43396f-5427-4640-833b-61d46a71840a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"ce313051-7f73-45ae-81f9-e99c92b5c21b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "63635f5e-9b24-4801-b9af-5749f56bcab4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:35:38.410 13:19:42 -- bdev/blockdev.sh@355 -- # [[ -n '' ]] 00:35:38.410 13:19:42 -- bdev/blockdev.sh@361 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:35:38.410 /home/vagrant/spdk_repo/spdk 00:35:38.410 ************************************ 00:35:38.410 END TEST bdev_fio 00:35:38.410 ************************************ 00:35:38.410 13:19:42 -- bdev/blockdev.sh@362 -- # popd 00:35:38.410 13:19:42 -- bdev/blockdev.sh@363 -- # trap - SIGINT SIGTERM EXIT 00:35:38.410 13:19:42 -- bdev/blockdev.sh@364 -- # return 0 00:35:38.410 00:35:38.410 real 0m12.984s 00:35:38.410 user 0m13.512s 00:35:38.410 sys 0m0.772s 00:35:38.410 13:19:42 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:35:38.410 13:19:42 -- common/autotest_common.sh@10 -- # set +x 00:35:38.410 13:19:42 -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:35:38.411 13:19:42 -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:35:38.411 13:19:42 -- common/autotest_common.sh@1075 -- # '[' 16 -le 1 ']' 00:35:38.411 13:19:42 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:35:38.411 13:19:42 -- common/autotest_common.sh@10 -- # set +x 00:35:38.411 ************************************ 00:35:38.411 START TEST bdev_verify 00:35:38.411 ************************************ 00:35:38.411 13:19:42 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:35:38.669 [2024-04-17 13:19:42.576664] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:35:38.669 [2024-04-17 13:19:42.577018] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151869 ] 00:35:38.669 [2024-04-17 13:19:42.752879] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:38.928 [2024-04-17 13:19:42.962675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:38.928 [2024-04-17 13:19:42.962684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:38.928 [2024-04-17 13:19:43.012990] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:35:39.495 [2024-04-17 13:19:43.482071] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:35:39.495 Running I/O for 5 seconds... 
00:35:44.762 00:35:44.762 Latency(us) 00:35:44.762 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:44.762 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:44.762 Verification LBA range: start 0x0 length 0x2000 00:35:44.762 raid5f : 5.01 7420.49 28.99 0.00 0.00 26010.91 222.49 23592.96 00:35:44.762 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:35:44.762 Verification LBA range: start 0x2000 length 0x2000 00:35:44.762 raid5f : 5.02 7458.78 29.14 0.00 0.00 25666.84 230.87 23235.49 00:35:44.762 =================================================================================================================== 00:35:44.762 Total : 14879.27 58.12 0.00 0.00 25838.35 222.49 23592.96 00:35:46.138 ************************************ 00:35:46.138 END TEST bdev_verify 00:35:46.138 ************************************ 00:35:46.138 00:35:46.138 real 0m7.366s 00:35:46.138 user 0m13.460s 00:35:46.138 sys 0m0.284s 00:35:46.138 13:19:49 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:35:46.138 13:19:49 -- common/autotest_common.sh@10 -- # set +x 00:35:46.138 13:19:49 -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:35:46.138 13:19:49 -- common/autotest_common.sh@1075 -- # '[' 16 -le 1 ']' 00:35:46.138 13:19:49 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:35:46.138 13:19:49 -- common/autotest_common.sh@10 -- # set +x 00:35:46.138 ************************************ 00:35:46.138 START TEST bdev_verify_big_io 00:35:46.138 ************************************ 00:35:46.138 13:19:49 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:35:46.138 [2024-04-17 13:19:50.015242] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:35:46.139 [2024-04-17 13:19:50.015585] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151974 ] 00:35:46.139 [2024-04-17 13:19:50.188858] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:46.396 [2024-04-17 13:19:50.442131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:46.396 [2024-04-17 13:19:50.442146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:46.396 [2024-04-17 13:19:50.492681] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:35:46.964 [2024-04-17 13:19:50.967598] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:35:46.964 Running I/O for 5 seconds... 
00:35:52.279 00:35:52.279 Latency(us) 00:35:52.279 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:52.279 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:35:52.279 Verification LBA range: start 0x0 length 0x200 00:35:52.279 raid5f : 5.31 454.02 28.38 0.00 0.00 6956508.00 170.36 299320.79 00:35:52.279 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:35:52.279 Verification LBA range: start 0x200 length 0x200 00:35:52.279 raid5f : 5.22 461.94 28.87 0.00 0.00 6816836.08 233.66 303133.79 00:35:52.279 =================================================================================================================== 00:35:52.279 Total : 915.96 57.25 0.00 0.00 6886657.56 170.36 303133.79 00:35:53.654 ************************************ 00:35:53.654 END TEST bdev_verify_big_io 00:35:53.654 ************************************ 00:35:53.654 00:35:53.654 real 0m7.790s 00:35:53.654 user 0m14.258s 00:35:53.654 sys 0m0.280s 00:35:53.654 13:19:57 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:35:53.654 13:19:57 -- common/autotest_common.sh@10 -- # set +x 00:35:53.654 13:19:57 -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:35:53.654 13:19:57 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:35:53.654 13:19:57 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:35:53.654 13:19:57 -- common/autotest_common.sh@10 -- # set +x 00:35:53.912 ************************************ 00:35:53.912 START TEST bdev_write_zeroes 00:35:53.912 ************************************ 00:35:53.912 13:19:57 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:35:53.912 [2024-04-17 13:19:57.881217] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:35:53.912 [2024-04-17 13:19:57.881644] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152105 ] 00:35:53.912 [2024-04-17 13:19:58.046717] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:54.171 [2024-04-17 13:19:58.262532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:54.171 [2024-04-17 13:19:58.312282] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:35:54.737 [2024-04-17 13:19:58.795000] rpc.c: 223:set_server_active_flag: *ERROR*: No server listening on provided address: (null) 00:35:54.737 Running I/O for 1 seconds... 
00:35:55.673 00:35:55.673 Latency(us) 00:35:55.673 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:55.673 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:35:55.673 raid5f : 1.01 21459.30 83.83 0.00 0.00 5942.59 1630.95 6821.70 00:35:55.673 =================================================================================================================== 00:35:55.673 Total : 21459.30 83.83 0.00 0.00 5942.59 1630.95 6821.70 00:35:57.600 ************************************ 00:35:57.600 END TEST bdev_write_zeroes 00:35:57.600 ************************************ 00:35:57.600 00:35:57.600 real 0m3.426s 00:35:57.600 user 0m3.058s 00:35:57.600 sys 0m0.241s 00:35:57.600 13:20:01 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:35:57.600 13:20:01 -- common/autotest_common.sh@10 -- # set +x 00:35:57.600 13:20:01 -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:35:57.600 13:20:01 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:35:57.600 13:20:01 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:35:57.600 13:20:01 -- common/autotest_common.sh@10 -- # set +x 00:35:57.600 ************************************ 00:35:57.600 START TEST bdev_json_nonenclosed 00:35:57.600 ************************************ 00:35:57.600 13:20:01 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:35:57.600 [2024-04-17 13:20:01.381931] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:35:57.600 [2024-04-17 13:20:01.382386] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152192 ] 00:35:57.600 [2024-04-17 13:20:01.545578] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:57.858 [2024-04-17 13:20:01.787975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:57.858 [2024-04-17 13:20:01.788243] json_config.c: 582:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:35:57.858 [2024-04-17 13:20:01.788394] rpc.c: 193:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:35:57.858 [2024-04-17 13:20:01.788452] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:35:58.117 00:35:58.117 real 0m0.864s 00:35:58.117 user 0m0.626s 00:35:58.117 sys 0m0.136s 00:35:58.117 13:20:02 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:35:58.117 13:20:02 -- common/autotest_common.sh@10 -- # set +x 00:35:58.117 ************************************ 00:35:58.117 END TEST bdev_json_nonenclosed 00:35:58.117 ************************************ 00:35:58.117 13:20:02 -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:35:58.117 13:20:02 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:35:58.117 13:20:02 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:35:58.117 13:20:02 -- common/autotest_common.sh@10 -- # set +x 00:35:58.117 ************************************ 00:35:58.117 START TEST bdev_json_nonarray 00:35:58.117 ************************************ 00:35:58.117 13:20:02 -- common/autotest_common.sh@1099 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:35:58.376 [2024-04-17 13:20:02.320856] Starting SPDK v24.05-pre git sha1 2b97e37d6 / DPDK 23.11.0 initialization... 00:35:58.376 [2024-04-17 13:20:02.321210] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152225 ] 00:35:58.376 [2024-04-17 13:20:02.482572] app.c: 821:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:58.634 [2024-04-17 13:20:02.693328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:58.634 [2024-04-17 13:20:02.693653] json_config.c: 588:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:35:58.634 [2024-04-17 13:20:02.693797] rpc.c: 193:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:35:58.634 [2024-04-17 13:20:02.693854] app.c: 959:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:35:59.202 ************************************ 00:35:59.202 END TEST bdev_json_nonarray 00:35:59.202 ************************************ 00:35:59.202 00:35:59.202 real 0m0.837s 00:35:59.202 user 0m0.604s 00:35:59.202 sys 0m0.132s 00:35:59.202 13:20:03 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:35:59.202 13:20:03 -- common/autotest_common.sh@10 -- # set +x 00:35:59.202 13:20:03 -- bdev/blockdev.sh@787 -- # [[ raid5f == bdev ]] 00:35:59.202 13:20:03 -- bdev/blockdev.sh@794 -- # [[ raid5f == gpt ]] 00:35:59.202 13:20:03 -- bdev/blockdev.sh@798 -- # [[ raid5f == crypto_sw ]] 00:35:59.202 13:20:03 -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:35:59.202 13:20:03 -- bdev/blockdev.sh@811 -- # cleanup 00:35:59.202 13:20:03 -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:35:59.202 13:20:03 -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:35:59.202 13:20:03 -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:35:59.202 13:20:03 -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:35:59.202 13:20:03 -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:35:59.202 13:20:03 -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:35:59.202 ************************************ 00:35:59.202 END TEST blockdev_raid5f 00:35:59.202 ************************************ 00:35:59.202 00:35:59.202 real 0m49.648s 00:35:59.202 user 1m7.903s 00:35:59.202 sys 0m4.547s 00:35:59.202 13:20:03 -- common/autotest_common.sh@1100 -- # xtrace_disable 00:35:59.202 13:20:03 -- common/autotest_common.sh@10 -- # set +x 00:35:59.202 13:20:03 -- spdk/autotest.sh@377 -- # trap - SIGINT SIGTERM EXIT 00:35:59.202 13:20:03 -- spdk/autotest.sh@379 -- # timing_enter post_cleanup 00:35:59.202 13:20:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:35:59.202 13:20:03 -- common/autotest_common.sh@10 -- # set +x 00:35:59.202 13:20:03 -- spdk/autotest.sh@380 -- # autotest_cleanup 00:35:59.202 13:20:03 -- common/autotest_common.sh@1366 -- # local autotest_es=0 00:35:59.202 13:20:03 -- common/autotest_common.sh@1367 -- # xtrace_disable 00:35:59.202 13:20:03 -- common/autotest_common.sh@10 -- # set +x 00:36:00.578 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:36:00.578 Waiting for block devices as requested 00:36:00.578 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:36:01.157 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:36:01.157 Cleaning 00:36:01.157 Removing: /var/run/dpdk/spdk0/config 00:36:01.157 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:01.157 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:01.157 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:01.157 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:01.157 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:01.157 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:01.157 Removing: /dev/shm/spdk_tgt_trace.pid109622 00:36:01.157 Removing: /var/run/dpdk/spdk0 00:36:01.157 Removing: /var/run/dpdk/spdk_pid109324 00:36:01.157 Removing: /var/run/dpdk/spdk_pid109622 00:36:01.157 Removing: /var/run/dpdk/spdk_pid109957 00:36:01.157 Removing: /var/run/dpdk/spdk_pid110226 00:36:01.157 Removing: 
/var/run/dpdk/spdk_pid110425 00:36:01.157 Removing: /var/run/dpdk/spdk_pid110541 00:36:01.157 Removing: /var/run/dpdk/spdk_pid110669 00:36:01.157 Removing: /var/run/dpdk/spdk_pid110793 00:36:01.157 Removing: /var/run/dpdk/spdk_pid110920 00:36:01.157 Removing: /var/run/dpdk/spdk_pid110990 00:36:01.157 Removing: /var/run/dpdk/spdk_pid111044 00:36:01.157 Removing: /var/run/dpdk/spdk_pid111129 00:36:01.157 Removing: /var/run/dpdk/spdk_pid111276 00:36:01.157 Removing: /var/run/dpdk/spdk_pid111864 00:36:01.157 Removing: /var/run/dpdk/spdk_pid111947 00:36:01.157 Removing: /var/run/dpdk/spdk_pid112029 00:36:01.157 Removing: /var/run/dpdk/spdk_pid112050 00:36:01.157 Removing: /var/run/dpdk/spdk_pid112260 00:36:01.157 Removing: /var/run/dpdk/spdk_pid112288 00:36:01.157 Removing: /var/run/dpdk/spdk_pid112459 00:36:01.157 Removing: /var/run/dpdk/spdk_pid112480 00:36:01.157 Removing: /var/run/dpdk/spdk_pid112560 00:36:01.157 Removing: /var/run/dpdk/spdk_pid112588 00:36:01.157 Removing: /var/run/dpdk/spdk_pid112683 00:36:01.157 Removing: /var/run/dpdk/spdk_pid112706 00:36:01.157 Removing: /var/run/dpdk/spdk_pid112911 00:36:01.157 Removing: /var/run/dpdk/spdk_pid112984 00:36:01.157 Removing: /var/run/dpdk/spdk_pid113030 00:36:01.157 Removing: /var/run/dpdk/spdk_pid113121 00:36:01.157 Removing: /var/run/dpdk/spdk_pid113221 00:36:01.157 Removing: /var/run/dpdk/spdk_pid113267 00:36:01.157 Removing: /var/run/dpdk/spdk_pid113385 00:36:01.157 Removing: /var/run/dpdk/spdk_pid113451 00:36:01.157 Removing: /var/run/dpdk/spdk_pid113506 00:36:01.157 Removing: /var/run/dpdk/spdk_pid113574 00:36:01.157 Removing: /var/run/dpdk/spdk_pid113648 00:36:01.157 Removing: /var/run/dpdk/spdk_pid113710 00:36:01.157 Removing: /var/run/dpdk/spdk_pid113774 00:36:01.157 Removing: /var/run/dpdk/spdk_pid113832 00:36:01.157 Removing: /var/run/dpdk/spdk_pid113914 00:36:01.157 Removing: /var/run/dpdk/spdk_pid113974 00:36:01.416 Removing: /var/run/dpdk/spdk_pid114035 00:36:01.416 Removing: /var/run/dpdk/spdk_pid114090 00:36:01.416 Removing: /var/run/dpdk/spdk_pid114171 00:36:01.416 Removing: /var/run/dpdk/spdk_pid114232 00:36:01.416 Removing: /var/run/dpdk/spdk_pid114294 00:36:01.416 Removing: /var/run/dpdk/spdk_pid114349 00:36:01.416 Removing: /var/run/dpdk/spdk_pid114431 00:36:01.416 Removing: /var/run/dpdk/spdk_pid114496 00:36:01.416 Removing: /var/run/dpdk/spdk_pid114560 00:36:01.416 Removing: /var/run/dpdk/spdk_pid114636 00:36:01.416 Removing: /var/run/dpdk/spdk_pid114691 00:36:01.416 Removing: /var/run/dpdk/spdk_pid114795 00:36:01.416 Removing: /var/run/dpdk/spdk_pid114938 00:36:01.416 Removing: /var/run/dpdk/spdk_pid115151 00:36:01.416 Removing: /var/run/dpdk/spdk_pid115252 00:36:01.416 Removing: /var/run/dpdk/spdk_pid115336 00:36:01.416 Removing: /var/run/dpdk/spdk_pid116674 00:36:01.416 Removing: /var/run/dpdk/spdk_pid116910 00:36:01.416 Removing: /var/run/dpdk/spdk_pid117160 00:36:01.416 Removing: /var/run/dpdk/spdk_pid117315 00:36:01.416 Removing: /var/run/dpdk/spdk_pid117478 00:36:01.416 Removing: /var/run/dpdk/spdk_pid117563 00:36:01.416 Removing: /var/run/dpdk/spdk_pid117605 00:36:01.416 Removing: /var/run/dpdk/spdk_pid117647 00:36:01.416 Removing: /var/run/dpdk/spdk_pid118185 00:36:01.416 Removing: /var/run/dpdk/spdk_pid118310 00:36:01.416 Removing: /var/run/dpdk/spdk_pid118424 00:36:01.416 Removing: /var/run/dpdk/spdk_pid118513 00:36:01.416 Removing: /var/run/dpdk/spdk_pid119849 00:36:01.416 Removing: /var/run/dpdk/spdk_pid120839 00:36:01.416 Removing: /var/run/dpdk/spdk_pid121827 00:36:01.416 Removing: 
/var/run/dpdk/spdk_pid123068 00:36:01.416 Removing: /var/run/dpdk/spdk_pid124271 00:36:01.416 Removing: /var/run/dpdk/spdk_pid125473 00:36:01.416 Removing: /var/run/dpdk/spdk_pid127126 00:36:01.416 Removing: /var/run/dpdk/spdk_pid128488 00:36:01.416 Removing: /var/run/dpdk/spdk_pid129816 00:36:01.416 Removing: /var/run/dpdk/spdk_pid130551 00:36:01.416 Removing: /var/run/dpdk/spdk_pid131166 00:36:01.416 Removing: /var/run/dpdk/spdk_pid131844 00:36:01.416 Removing: /var/run/dpdk/spdk_pid132355 00:36:01.416 Removing: /var/run/dpdk/spdk_pid132992 00:36:01.416 Removing: /var/run/dpdk/spdk_pid133622 00:36:01.416 Removing: /var/run/dpdk/spdk_pid134382 00:36:01.416 Removing: /var/run/dpdk/spdk_pid134951 00:36:01.416 Removing: /var/run/dpdk/spdk_pid136483 00:36:01.416 Removing: /var/run/dpdk/spdk_pid137146 00:36:01.416 Removing: /var/run/dpdk/spdk_pid137762 00:36:01.416 Removing: /var/run/dpdk/spdk_pid139458 00:36:01.416 Removing: /var/run/dpdk/spdk_pid140192 00:36:01.416 Removing: /var/run/dpdk/spdk_pid140864 00:36:01.416 Removing: /var/run/dpdk/spdk_pid141686 00:36:01.416 Removing: /var/run/dpdk/spdk_pid141748 00:36:01.416 Removing: /var/run/dpdk/spdk_pid141799 00:36:01.416 Removing: /var/run/dpdk/spdk_pid141878 00:36:01.416 Removing: /var/run/dpdk/spdk_pid142023 00:36:01.416 Removing: /var/run/dpdk/spdk_pid142190 00:36:01.416 Removing: /var/run/dpdk/spdk_pid142435 00:36:01.416 Removing: /var/run/dpdk/spdk_pid142741 00:36:01.416 Removing: /var/run/dpdk/spdk_pid142760 00:36:01.416 Removing: /var/run/dpdk/spdk_pid142826 00:36:01.416 Removing: /var/run/dpdk/spdk_pid142864 00:36:01.416 Removing: /var/run/dpdk/spdk_pid142902 00:36:01.416 Removing: /var/run/dpdk/spdk_pid142933 00:36:01.416 Removing: /var/run/dpdk/spdk_pid142961 00:36:01.416 Removing: /var/run/dpdk/spdk_pid142993 00:36:01.416 Removing: /var/run/dpdk/spdk_pid143040 00:36:01.416 Removing: /var/run/dpdk/spdk_pid143072 00:36:01.416 Removing: /var/run/dpdk/spdk_pid143100 00:36:01.416 Removing: /var/run/dpdk/spdk_pid143139 00:36:01.416 Removing: /var/run/dpdk/spdk_pid143170 00:36:01.416 Removing: /var/run/dpdk/spdk_pid143219 00:36:01.416 Removing: /var/run/dpdk/spdk_pid143258 00:36:01.416 Removing: /var/run/dpdk/spdk_pid143285 00:36:01.416 Removing: /var/run/dpdk/spdk_pid143318 00:36:01.416 Removing: /var/run/dpdk/spdk_pid143370 00:36:01.416 Removing: /var/run/dpdk/spdk_pid143404 00:36:01.417 Removing: /var/run/dpdk/spdk_pid143436 00:36:01.417 Removing: /var/run/dpdk/spdk_pid143488 00:36:01.417 Removing: /var/run/dpdk/spdk_pid143524 00:36:01.417 Removing: /var/run/dpdk/spdk_pid143584 00:36:01.417 Removing: /var/run/dpdk/spdk_pid143676 00:36:01.417 Removing: /var/run/dpdk/spdk_pid143734 00:36:01.417 Removing: /var/run/dpdk/spdk_pid143761 00:36:01.417 Removing: /var/run/dpdk/spdk_pid143810 00:36:01.417 Removing: /var/run/dpdk/spdk_pid143859 00:36:01.417 Removing: /var/run/dpdk/spdk_pid143881 00:36:01.417 Removing: /var/run/dpdk/spdk_pid143949 00:36:01.417 Removing: /var/run/dpdk/spdk_pid143980 00:36:01.417 Removing: /var/run/dpdk/spdk_pid144027 00:36:01.417 Removing: /var/run/dpdk/spdk_pid144075 00:36:01.417 Removing: /var/run/dpdk/spdk_pid144100 00:36:01.417 Removing: /var/run/dpdk/spdk_pid144128 00:36:01.417 Removing: /var/run/dpdk/spdk_pid144153 00:36:01.417 Removing: /var/run/dpdk/spdk_pid144181 00:36:01.417 Removing: /var/run/dpdk/spdk_pid144229 00:36:01.417 Removing: /var/run/dpdk/spdk_pid144253 00:36:01.417 Removing: /var/run/dpdk/spdk_pid144308 00:36:01.417 Removing: /var/run/dpdk/spdk_pid144359 00:36:01.417 Removing: 
/var/run/dpdk/spdk_pid144388 00:36:01.417 Removing: /var/run/dpdk/spdk_pid144456 00:36:01.417 Removing: /var/run/dpdk/spdk_pid144484 00:36:01.676 Removing: /var/run/dpdk/spdk_pid144510 00:36:01.676 Removing: /var/run/dpdk/spdk_pid144579 00:36:01.676 Removing: /var/run/dpdk/spdk_pid144616 00:36:01.676 Removing: /var/run/dpdk/spdk_pid144669 00:36:01.676 Removing: /var/run/dpdk/spdk_pid144696 00:36:01.676 Removing: /var/run/dpdk/spdk_pid144720 00:36:01.676 Removing: /var/run/dpdk/spdk_pid144748 00:36:01.676 Removing: /var/run/dpdk/spdk_pid144773 00:36:01.676 Removing: /var/run/dpdk/spdk_pid144817 00:36:01.676 Removing: /var/run/dpdk/spdk_pid144842 00:36:01.676 Removing: /var/run/dpdk/spdk_pid144866 00:36:01.676 Removing: /var/run/dpdk/spdk_pid144973 00:36:01.676 Removing: /var/run/dpdk/spdk_pid145092 00:36:01.676 Removing: /var/run/dpdk/spdk_pid145281 00:36:01.676 Removing: /var/run/dpdk/spdk_pid145321 00:36:01.676 Removing: /var/run/dpdk/spdk_pid145378 00:36:01.676 Removing: /var/run/dpdk/spdk_pid145468 00:36:01.676 Removing: /var/run/dpdk/spdk_pid145504 00:36:01.676 Removing: /var/run/dpdk/spdk_pid145537 00:36:01.676 Removing: /var/run/dpdk/spdk_pid145567 00:36:01.676 Removing: /var/run/dpdk/spdk_pid145632 00:36:01.676 Removing: /var/run/dpdk/spdk_pid145662 00:36:01.676 Removing: /var/run/dpdk/spdk_pid145762 00:36:01.676 Removing: /var/run/dpdk/spdk_pid145825 00:36:01.676 Removing: /var/run/dpdk/spdk_pid145894 00:36:01.676 Removing: /var/run/dpdk/spdk_pid146217 00:36:01.676 Removing: /var/run/dpdk/spdk_pid146363 00:36:01.676 Removing: /var/run/dpdk/spdk_pid146421 00:36:01.676 Removing: /var/run/dpdk/spdk_pid146516 00:36:01.676 Removing: /var/run/dpdk/spdk_pid146625 00:36:01.676 Removing: /var/run/dpdk/spdk_pid146680 00:36:01.676 Removing: /var/run/dpdk/spdk_pid146962 00:36:01.676 Removing: /var/run/dpdk/spdk_pid147074 00:36:01.676 Removing: /var/run/dpdk/spdk_pid147201 00:36:01.676 Removing: /var/run/dpdk/spdk_pid147281 00:36:01.676 Removing: /var/run/dpdk/spdk_pid147323 00:36:01.676 Removing: /var/run/dpdk/spdk_pid147414 00:36:01.676 Removing: /var/run/dpdk/spdk_pid147895 00:36:01.676 Removing: /var/run/dpdk/spdk_pid147970 00:36:01.676 Removing: /var/run/dpdk/spdk_pid148330 00:36:01.676 Removing: /var/run/dpdk/spdk_pid148435 00:36:01.676 Removing: /var/run/dpdk/spdk_pid148561 00:36:01.676 Removing: /var/run/dpdk/spdk_pid148622 00:36:01.676 Removing: /var/run/dpdk/spdk_pid148678 00:36:01.676 Removing: /var/run/dpdk/spdk_pid148712 00:36:01.676 Removing: /var/run/dpdk/spdk_pid150205 00:36:01.676 Removing: /var/run/dpdk/spdk_pid150350 00:36:01.676 Removing: /var/run/dpdk/spdk_pid150355 00:36:01.676 Removing: /var/run/dpdk/spdk_pid150372 00:36:01.676 Removing: /var/run/dpdk/spdk_pid150882 00:36:01.676 Removing: /var/run/dpdk/spdk_pid151001 00:36:01.676 Removing: /var/run/dpdk/spdk_pid151186 00:36:01.676 Removing: /var/run/dpdk/spdk_pid151262 00:36:01.676 Removing: /var/run/dpdk/spdk_pid151328 00:36:01.676 Removing: /var/run/dpdk/spdk_pid151640 00:36:01.676 Removing: /var/run/dpdk/spdk_pid151869 00:36:01.676 Removing: /var/run/dpdk/spdk_pid151974 00:36:01.676 Removing: /var/run/dpdk/spdk_pid152105 00:36:01.676 Removing: /var/run/dpdk/spdk_pid152192 00:36:01.676 Removing: /var/run/dpdk/spdk_pid152225 00:36:01.676 Clean 00:36:01.935 13:20:05 -- common/autotest_common.sh@1425 -- # return 0 00:36:01.935 13:20:05 -- spdk/autotest.sh@381 -- # timing_exit post_cleanup 00:36:01.935 13:20:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:36:01.935 13:20:05 -- common/autotest_common.sh@10 -- # 
set +x 00:36:01.935 13:20:05 -- spdk/autotest.sh@383 -- # timing_exit autotest 00:36:01.935 13:20:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:36:01.935 13:20:05 -- common/autotest_common.sh@10 -- # set +x 00:36:01.935 13:20:05 -- spdk/autotest.sh@384 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:36:01.935 13:20:05 -- spdk/autotest.sh@386 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:36:01.935 13:20:05 -- spdk/autotest.sh@386 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:36:01.935 13:20:05 -- spdk/autotest.sh@388 -- # hash lcov 00:36:01.935 13:20:05 -- spdk/autotest.sh@388 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:36:01.935 13:20:05 -- spdk/autotest.sh@390 -- # hostname 00:36:01.935 13:20:05 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t ubuntu2004-cloud-1712646987-2220 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:36:02.195 geninfo: WARNING: invalid characters removed from testname! 00:36:58.512 13:20:54 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:58.512 13:21:01 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:01.038 13:21:04 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:05.222 13:21:08 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:08.516 13:21:12 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:11.802 13:21:15 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:15.086 13:21:19 -- spdk/autotest.sh@397 -- # rm -f cov_base.info cov_test.info 
OLD_STDOUT OLD_STDERR 00:37:15.086 13:21:19 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:15.086 13:21:19 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:37:15.086 13:21:19 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:15.086 13:21:19 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:15.086 13:21:19 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:37:15.086 13:21:19 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:37:15.086 13:21:19 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:37:15.086 13:21:19 -- paths/export.sh@5 -- $ export PATH 00:37:15.086 13:21:19 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:37:15.086 13:21:19 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:37:15.086 13:21:19 -- common/autobuild_common.sh@435 -- $ date +%s 00:37:15.086 13:21:19 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713360079.XXXXXX 00:37:15.086 13:21:19 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713360079.z9OeWV 00:37:15.086 13:21:19 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:37:15.086 13:21:19 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:37:15.086 13:21:19 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:37:15.086 13:21:19 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:37:15.086 13:21:19 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:37:15.086 13:21:19 -- common/autobuild_common.sh@451 -- $ get_config_params 00:37:15.086 13:21:19 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:37:15.086 13:21:19 -- common/autotest_common.sh@10 -- $ set +x 00:37:15.086 13:21:19 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f' 00:37:15.086 13:21:19 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:37:15.086 13:21:19 -- pm/common@17 -- $ 
local monitor 00:37:15.086 13:21:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:15.086 13:21:19 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=153855 00:37:15.086 13:21:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:15.086 13:21:19 -- pm/common@21 -- $ date +%s 00:37:15.086 13:21:19 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=153857 00:37:15.086 13:21:19 -- pm/common@26 -- $ sleep 1 00:37:15.086 13:21:19 -- pm/common@21 -- $ date +%s 00:37:15.086 13:21:19 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1713360079 00:37:15.086 13:21:19 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1713360079 00:37:15.086 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:37:15.086 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:37:15.086 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1713360079_collect-vmstat.pm.log 00:37:15.086 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1713360079_collect-cpu-load.pm.log 00:37:16.020 13:21:20 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:37:16.020 13:21:20 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:37:16.020 13:21:20 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:37:16.020 13:21:20 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:37:16.020 13:21:20 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:37:16.020 13:21:20 -- spdk/autopackage.sh@19 -- $ timing_finish 00:37:16.020 13:21:20 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:37:16.020 13:21:20 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:37:16.020 13:21:20 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:37:16.278 13:21:20 -- spdk/autopackage.sh@20 -- $ exit 0 00:37:16.278 13:21:20 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:37:16.278 13:21:20 -- pm/common@30 -- $ signal_monitor_resources TERM 00:37:16.278 13:21:20 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:37:16.278 13:21:20 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:16.278 13:21:20 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:37:16.278 13:21:20 -- pm/common@45 -- $ pid=153862 00:37:16.278 13:21:20 -- pm/common@52 -- $ sudo kill -TERM 153862 00:37:16.278 13:21:20 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:16.278 13:21:20 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:37:16.278 13:21:20 -- pm/common@45 -- $ pid=153865 00:37:16.278 13:21:20 -- pm/common@52 -- $ sudo kill -TERM 153865 00:37:16.278 + [[ -n 2315 ]] 00:37:16.278 + sudo kill 2315 00:37:16.278 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:37:16.288 [Pipeline] } 00:37:16.305 [Pipeline] // timeout 00:37:16.310 [Pipeline] } 00:37:16.322 [Pipeline] // stage 00:37:16.327 [Pipeline] } 00:37:16.341 [Pipeline] // catchError 00:37:16.348 [Pipeline] stage 00:37:16.349 [Pipeline] { (Stop VM) 00:37:16.358 
[Pipeline] sh 00:37:16.634 + vagrant halt 00:37:20.823 ==> default: Halting domain... 00:37:30.804 [Pipeline] sh 00:37:31.084 + vagrant destroy -f 00:37:35.316 ==> default: Removing domain... 00:37:35.327 [Pipeline] sh 00:37:35.605 + mv output /var/jenkins/workspace/ubuntu20-vg-autotest_3/output 00:37:35.613 [Pipeline] } 00:37:35.630 [Pipeline] // stage 00:37:35.634 [Pipeline] } 00:37:35.649 [Pipeline] // dir 00:37:35.653 [Pipeline] } 00:37:35.665 [Pipeline] // wrap 00:37:35.670 [Pipeline] } 00:37:35.685 [Pipeline] // catchError 00:37:35.693 [Pipeline] stage 00:37:35.696 [Pipeline] { (Epilogue) 00:37:35.710 [Pipeline] sh 00:37:35.986 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:57.917 [Pipeline] catchError 00:37:57.919 [Pipeline] { 00:37:57.929 [Pipeline] sh 00:37:58.207 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:58.208 Artifacts sizes are good 00:37:58.216 [Pipeline] } 00:37:58.232 [Pipeline] // catchError 00:37:58.242 [Pipeline] archiveArtifacts 00:37:58.248 Archiving artifacts 00:37:58.638 [Pipeline] cleanWs 00:37:58.649 [WS-CLEANUP] Deleting project workspace... 00:37:58.649 [WS-CLEANUP] Deferred wipeout is used... 00:37:58.655 [WS-CLEANUP] done 00:37:58.656 [Pipeline] } 00:37:58.674 [Pipeline] // stage 00:37:58.680 [Pipeline] } 00:37:58.694 [Pipeline] // node 00:37:58.699 [Pipeline] End of Pipeline 00:37:58.737 Finished: SUCCESS
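The run above exercises the raid5f bdev three ways: an NBD export with mkfs on top (bdev_nbd), fio through the spdk_bdev ioengine (bdev_fio), and bdevperf verify passes (bdev_verify). A minimal sketch of the same flow, distilled from the commands visible verbatim in the trace; the paths and the /var/tmp/spdk-nbd.sock socket are the vagrant-workspace values from this log, and the RPC shell variable is just shorthand introduced here:

# Assumes an SPDK app is already serving RPCs on /var/tmp/spdk-nbd.sock
# (the test starts one first, the reactor_0 process killed at the end).
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

# NBD + lvol verify, as in the bdev_nbd step above.
$RPC bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB malloc bdev, 512 B blocks
$RPC bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of it
$RPC bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol in that lvstore
$RPC nbd_start_disk lvs/lvol /dev/nbd0                 # expose the lvol as /dev/nbd0
mkfs.ext4 /dev/nbd0    # small enough that mke2fs reports "Filesystem too small for a journal"
$RPC nbd_stop_disk /dev/nbd0

# fio read/write verify against the raid5f bdev (bdev_fio_rw_verify above);
# libasan is preloaded alongside the fio plugin because this is an ASan build.
LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
  /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
  /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 \
  --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 \
  --aux-path=/home/vagrant/spdk_repo/spdk/../output

# bdevperf verify pass (bdev_verify above): queue depth 128, 4 KiB I/O,
# 5 seconds, cores 0-1 (-m 0x3), matching the two reactors in the log.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
  -q 128 -o 4096 -w verify -t 5 -C -m 0x3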